Can you help our image selection algorithm get better?
We’re building a very cool tool that recommends images for a piece of text. Can you spare 5 mins to help with our experiment?
TL;DR: BrightMinded is building a very cool tool that automatically recommends images for a piece of text. It will help content creators select images faster – images that also have a greater chance of being popular with the target audience, driving higher engagement.
To improve the algorithm that powers the tool, we’re running an experiment that gathers data to ‘teach’ it.
Would you like to take part? We’d be very grateful if you did!
It only takes around 5 minutes and can be found here:
If you’d like to read more on the background of this experiment, please read on…
Six months ago, the BrightMinded skunk team chose to accept the mission of developing a tool that recommends images to accompany a piece of text.
Since the start of this mission, we have analyzed thousands of content pieces to explore how an image can reinforce a piece of text. This has led to some successes; we have even used our current model to choose this blog post’s image.
However, something that is much harder to understand from looking only at a piece of text and a picture is the person behind the selection.
That’s where I come in.
As an applied cognitive scientist, my job is to use my understanding of human behaviour to explore decision making.
Most recently, the team and I have been investigating the concept of predicting image popularity.
Thousands of images are uploaded to the internet every minute through various platforms. Some images receive no views, while others receive millions of views.
If our tool recommended an image for your content, I assume you would prefer the one that is more likely to receive a million views?
To recommend those images, we first need to be able to predict which picture will be popular. In recent years, predicting image popularity has primarily been researched using interaction data (likes and click-throughs) and has boasted some success (Niu et al., 2012).
The extent to which a prediction model is successful is measured using an accuracy score (such as 67%), which is essentially the proportion of images the model has never seen before for which it correctly predicts popularity (for example, the number of likes).
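Concretely, accuracy is just the fraction of correct predictions on held-out images. Here is a minimal sketch, with made-up labels purely for illustration (1 = “popular”, 0 = “not popular”):

```python
# Illustrative labels only: the model's guesses for six images it
# has never seen, compared against their true popularity class.
actual    = [1, 0, 1, 1, 0, 1]   # true popularity of unseen images
predicted = [1, 0, 0, 1, 0, 1]   # the model's guesses

# Accuracy = number of correct guesses / number of images.
correct = sum(a == p for a, p in zip(actual, predicted))
accuracy = correct / len(actual)
print(f"{accuracy:.0%}")  # 5 of 6 correct -> 83%
```

A real evaluation would use thousands of held-out images, but the calculation is the same.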
Some recent models have reported high accuracy for this task. In 2014, McParlane and colleagues expanded on Niu’s research by additionally accounting for both views and comments, achieving an accuracy of up to 76%. Others have even expanded on this to include information on the user posting the image in their model, achieving an accuracy of up to 90% (Hu, Yamasaki, & Aizawa, 2016).
Despite all this research on predicting popularity, there is still more we need to explore and understand.
What more do we need to know?
Firstly, the currently available research predominantly focuses on the popularity of the photo in isolation. In our case, we need to predict the popularity of an image alongside a piece of text.
Recent research has demonstrated that an image’s popularity is more accurately estimated when considering both textual and visual features of the post (Hessel, Lee & Mimno, 2017). We aim to build on this research, which investigated short texts such as image captions, by exploring images in relation to longer pieces of written text.
Secondly, current research primarily predicts popularity using data from social media platforms such as Flickr and Instagram.
We, on the other hand, want to understand how text and images can be combined for greater impact and popularity, exploring both objective criteria (e.g. SEO) and subjective ones (the writer’s intention, style, and personality).
So now we have arrived at my mission here today. I need to answer the question:
“What images do people tend to prefer alongside a piece of written text (such as a blog excerpt)?”
How do we plan to explore this question?
I am approaching this question the best way I know how: by exploring human behaviour itself.
So, I have designed an experiment to measure people’s image preferences in relation to the text. Participants are asked to read a blog excerpt and then rate which images they would prefer to use alongside it.
This experiment will provide us with data that we can use to train our own prediction models for image selection.
So why don’t you try it yourself? You can really help us make progress by taking part, and we’d be grateful for your time!
The experiment only takes 5 minutes – click the button below to get started!
Hessel, J., Lee, L., & Mimno, D. (2017, April). Cats and captions vs. creators and the clock: Comparing multimodal content to context in predicting relative popularity. In Proceedings of the 26th International Conference on World Wide Web (pp. 927-936).
Hu, J., Yamasaki, T., & Aizawa, K. (2016, May). Multimodal learning for image popularity prediction on social media. In 2016 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW) (pp. 1-2). IEEE.
McParlane, P. J., Moshfeghi, Y., & Jose, J. M. (2014, April). “Nobody comes here anymore, it’s too crowded”: Predicting image popularity on Flickr. In Proceedings of the International Conference on Multimedia Retrieval (pp. 385-391).
Niu, X., Li, L., Mei, T., Shen, J., & Xu, K. (2012, July). Predicting image popularity in an incomplete social media community by a weighted bi-partite graph. In 2012 IEEE International Conference on Multimedia and Expo (pp. 735-740). IEEE.