<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[CIASS]]></title><description><![CDATA[Centre for Image Analysis in the Social Sciences]]></description><link>https://ciass.uni-konstanz.de/</link><image><url>https://ciass.uni-konstanz.de/favicon.png</url><title>CIASS</title><link>https://ciass.uni-konstanz.de/</link></image><generator>Ghost 4.41</generator><lastBuildDate>Thu, 10 Jul 2025 07:58:39 GMT</lastBuildDate><atom:link href="https://ciass.uni-konstanz.de/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Seeing is Deceiving? AI-Manipulated Images and Protest Size Estimates]]></title><description><![CDATA[This project examines the effect of AI-modified images on public perception by examining the effect of manipulated protest images on estimated crowd sizes.]]></description><link>https://ciass.uni-konstanz.de/seeing-is-deceiving-ai-manipulated-images-and-protest-size-perception/</link><guid isPermaLink="false">684fe8abee79b500013fbb3b</guid><category><![CDATA[Project]]></category><dc:creator><![CDATA[Stefan Scholz]]></dc:creator><pubDate>Mon, 16 Jun 2025 09:57:18 GMT</pubDate><media:content url="https://ciass.uni-konstanz.de/content/images/2025/06/method-2.png" medium="image"/><content:encoded><![CDATA[<img src="https://ciass.uni-konstanz.de/content/images/2025/06/method-2.png" alt="Seeing is Deceiving? AI-Manipulated Images and Protest Size Estimates"><p>The size of the crowd at large political protests is an extremely political number, since it is usually interpreted as the level of popular support for a regime or a certain political issue. This is particularly the case in autocracies, where protest is one of the few ways for citizens to express their political preferences. 
Not surprisingly, it is in the interest of political actors to manipulate perceptions of the crowd size in order to boost support for, or to delegitimize, a particular political issue. This paper studies how perceptions of the crowd size can be manipulated with AI-generated protest images, which, due to the difficulty of detection, is one of the more subtle ways in which AI-generated visual content can be used. In an experiment, participants are asked to rate the size of the protest crowd in a series of social media images. These images are manipulated to display larger (or smaller) crowd sizes using a generative image model. The results show that the manipulation is effective: AI-inflated (or reduced) crowd portrayals lead to higher (or lower) crowd estimates compared to the unmodified images. These results demonstrate the effectiveness of AI-generated visual content in shaping popular perceptions of political events on social media.</p><h2 id="research-article">Research Article</h2><p>First insights into the results can be found in the following preprint.</p><p><a href="https://doi.org/10.31235/osf.io/jm7bf_v1">Scholz, Stefan, and Nils B. Weidmann. 2025. &#x201C;Seeing is Deceiving? AI-Manipulated Images and Protest Size Estimates.&#x201D; SocArXiv. July 2. doi:10.31235/osf.io/jm7bf_v1.</a></p><h2 id="replication-materials">Replication Materials</h2><p>The code and data for the article are currently available on <a href="https://github.com/ciass-konstanz/protest-manipulations">GitHub</a>. The final replication materials will be published upon acceptance of the manuscript.</p>]]></content:encoded></item><item><title><![CDATA[Predicting Protest Escalation by Exploiting Short-Term Dynamics]]></title><description><![CDATA[A framework for predicting protest dynamics using protest images from social media. Measuring protest characteristics per hour also allows us to predict protest dynamics for the hours and days to follow. 
]]></description><link>https://ciass.uni-konstanz.de/predicting-protest-escalation/</link><guid isPermaLink="false">684fe4bdee79b500013fbac0</guid><category><![CDATA[Project]]></category><dc:creator><![CDATA[Stefan Scholz]]></dc:creator><pubDate>Tue, 03 Jun 2025 09:45:00 GMT</pubDate><media:content url="https://ciass.uni-konstanz.de/content/images/2025/06/method.png" medium="image"/><content:encoded><![CDATA[<img src="https://ciass.uni-konstanz.de/content/images/2025/06/method.png" alt="Predicting Protest Escalation by Exploiting Short-Term Dynamics"><p>Protests can remain peaceful for long periods but then suddenly escalate into violent clashes, shaping the events that follow. Previous studies have focused on predicting whether protest events will take place, but they have not predicted the descriptive characteristics of these events. This gap might be attributed to a reliance on highly aggregated event data, which overlooks abrupt changes that can happen within individual days &#x2013; such as a protester throwing a stone at a police officer. For this reason, this project aims to forecast the escalation of protests for individual days and cities. To incorporate within-day dynamics into the prediction models, these dynamics are measured using protest images from social media. Applying computer vision techniques to these images provides detailed estimates of the number of protesters, the tactics employed by the protesters, and the tactics used by law enforcement against them. This approach is demonstrated using a new dataset that includes 13 protest periods and 22,479 protest images. To validate the effectiveness of the new short-term indicators, the predictive performance of machine-learning models is compared against that of models that cannot draw on them. The results on the hold-out sample show that incorporating short-term dynamics improves the prediction of protest characteristics. 
While these improvements may appear marginal, the approach reveals numerous opportunities for forecasting the escalation of protests, with implications for their mitigation and prevention.</p><h2 id="research-article">Research Article</h2><p>First insights into the results can be found in the following preprint.</p><p><a href="https://doi.org/10.31235/osf.io/vh8at_v1">Scholz, Stefan. 2025. &#x201C;Predicting Protest Escalation by Exploiting Short-term Dynamics.&#x201D; SocArXiv. June 3. doi:10.31235/osf.io/vh8at_v1.</a></p><h2 id="replication-materials">Replication Materials</h2><p>The code and data for the article are currently available on <a href="https://github.com/ciass-konstanz/protest-dynamics">GitHub</a>. The final replication materials will be published upon acceptance of the manuscript.</p><p>Unfortunately, I cannot make the images themselves publicly available, for data protection and copyright reasons. However, I will make the image archive available to colleagues in political and computer science upon request. For most research projects, the tabular data of the images and extracted features ought to be sufficient. If you believe you still require the archive of images and are eligible for it, please email the corresponding author.</p>]]></content:encoded></item><item><title><![CDATA[Protest Image Dataset]]></title><description><![CDATA[The Protest Image Dataset is a new data collection project that includes images from social media with an emphasis on political protests. 
Besides the images themselves, the dataset contains variables on location and time, as well as a hand-annotated protest variable.]]></description><link>https://ciass.uni-konstanz.de/protest-dataset/</link><guid isPermaLink="false">6408ada9abf43f0001a3825f</guid><category><![CDATA[Project]]></category><dc:creator><![CDATA[Stefan Scholz]]></dc:creator><pubDate>Sun, 08 Dec 2024 16:18:00 GMT</pubDate><media:content url="https://ciass.uni-konstanz.de/content/images/2023/03/protest-dataset-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://ciass.uni-konstanz.de/content/images/2023/03/protest-dataset-1.jpg" alt="Protest Image Dataset"><p>To advance the state of research on political movements and protests, we aim to study them through images and videos. A fundamental task is to recognize the visual features of protest. This task can already be accomplished by image processing algorithms developed in the field of computer vision. They need a large set of annotated images to learn, for example, to distinguish a protest image from a non-protest image. Two datasets of protest images already exist, but unfortunately they are not suitable for our analyses because they either contain images from only a single country, do not match the images to any country, or do not contain any non-protest images.</p><p>For this reason, we decided to collect our own dataset. To do so, we first collected images from social media from different parts of the world. Second, we coded these images as protest or non-protest images according to predefined criteria. Our dataset contains 141,538 protest images from 10 countries. We believe that this is the first dataset that contains protest images from a variety of regions. In addition, it is the first that allows one to determine for each image whether it shows a protest and from which country it originates. 
</p><h2 id="research-article">Research Article</h2><p>Further information on the collection of the dataset can be found in the following research article.</p><p><a href="https://doi.org/10.1017/pan.2024.18" rel="nofollow">Scholz, S., Weidmann, N. B., Steinert-Threlkeld, Z. C., Keremoglu, E., &amp; Goldl&#xFC;cke, B. (2025). Improving Computer Vision Interpretability: Transparent Two-Level Classification for Complex Scenes. Political Analysis, 33(2), 107&#x2013;121.</a></p><h2 id="release">Release</h2><p>We have released the first version of the dataset, which includes tabular data on the images and the detected segments within these images. You can download this dataset from the <a href="https://doi.org/10.7910/DVN/TFTEF2">Harvard Dataverse</a>.</p><p>Unfortunately, we cannot make the images themselves publicly available, for data protection and copyright reasons. However, we will make the image archive available to colleagues in political and computer science upon request. For most research projects, the tabular data of the images and segments ought to be sufficient. 
If you believe you still require the archive of images and are eligible for it, please email the corresponding author.</p>]]></content:encoded></item><item><title><![CDATA[Transparent Two-level Classification Method for Images]]></title><description><![CDATA[We present a new method that makes the classification of images more transparent.]]></description><link>https://ciass.uni-konstanz.de/protest-segment/</link><guid isPermaLink="false">6408c3daabf43f0001a3834d</guid><category><![CDATA[Project]]></category><dc:creator><![CDATA[Stefan Scholz]]></dc:creator><pubDate>Wed, 22 Feb 2023 17:35:00 GMT</pubDate><media:content url="https://ciass.uni-konstanz.de/content/images/2023/03/visual-comparison-3-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://ciass.uni-konstanz.de/content/images/2023/03/visual-comparison-3-1.jpg" alt="Transparent Two-level Classification Method for Images"><p>While existing image classifiers reach high levels of accuracy, it is difficult to systematically assess the visual features on which they base their classification. This problem is especially pressing for complex images that contain many different types of objects. Our method detects the objects present in images, creates feature vectors from those objects, and uses them as input for machine learning classifiers. We tested this approach on a new dataset of 140,000 images to predict which ones show protest. Its accuracy is roughly on par with that of popular CNNs. The novelty of the method is that it provides new insights for comparative politics: while persons, flags, and signboards are important objects in protest images, the particular features of protest differ across countries and protest episodes. 
Our method can detect these.</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://ciass.uni-konstanz.de/content/images/2023/03/visual-comparison-1.jpg" width="1200" height="900" loading="lazy" alt="Transparent Two-level Classification Method for Images" srcset="https://ciass.uni-konstanz.de/content/images/size/w600/2023/03/visual-comparison-1.jpg 600w, https://ciass.uni-konstanz.de/content/images/size/w1000/2023/03/visual-comparison-1.jpg 1000w, https://ciass.uni-konstanz.de/content/images/2023/03/visual-comparison-1.jpg 1200w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://ciass.uni-konstanz.de/content/images/2023/03/visual-comparison-2.jpg" width="1200" height="900" loading="lazy" alt="Transparent Two-level Classification Method for Images" srcset="https://ciass.uni-konstanz.de/content/images/size/w600/2023/03/visual-comparison-2.jpg 600w, https://ciass.uni-konstanz.de/content/images/size/w1000/2023/03/visual-comparison-2.jpg 1000w, https://ciass.uni-konstanz.de/content/images/2023/03/visual-comparison-2.jpg 1200w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://ciass.uni-konstanz.de/content/images/2023/03/visual-comparison-3.jpg" width="1200" height="900" loading="lazy" alt="Transparent Two-level Classification Method for Images" srcset="https://ciass.uni-konstanz.de/content/images/size/w600/2023/03/visual-comparison-3.jpg 600w, https://ciass.uni-konstanz.de/content/images/size/w1000/2023/03/visual-comparison-3.jpg 1000w, https://ciass.uni-konstanz.de/content/images/2023/03/visual-comparison-3.jpg 1200w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption>The figure shows a comparison of a protest image (left) processed with two classification algorithms (centre, right). 
The first is a conventional algorithm; the centre image highlights the areas from which protest can be recognised. The second is the segmentation algorithm we developed; the right image highlights the objects it uses.</figcaption></figure><h2 id="research-article">Research Article</h2><p>Further information on the method can be found in the following research article.</p><p><a href="https://doi.org/10.1017/pan.2024.18" rel="nofollow">Scholz, S., Weidmann, N. B., Steinert-Threlkeld, Z. C., Keremoglu, E., &amp; Goldl&#xFC;cke, B. (2025). Improving Computer Vision Interpretability: Transparent Two-Level Classification for Complex Scenes. Political Analysis, 33(2), 107&#x2013;121.</a></p><h2 id="replication-materials">Replication Materials</h2><p>The replication code, model weights, and data for this article have been published on <a href="https://github.com/ciass-konstanz/protest-segments">GitHub</a> and the <a href="https://doi.org/10.7910/DVN/TFTEF2">Harvard Dataverse</a>.</p><h2 id="demo">Demo</h2><p>If you just want to try out our method on a couple of your own images, we recommend the demo. It allows you to upload an image and define a vocabulary of objects and an aggregation, and it returns the segmented image and the corresponding feature vector. The demo can be used via an <a href="https://huggingface.co/spaces/ciass/protest-segments">interactive demonstration application with reduced functionality</a> or via an <a href="https://huggingface.co/spaces/ciass/protest-segments?view=api">application programming interface</a> on Hugging Face.</p>]]></content:encoded></item><item><title><![CDATA[Self-Portrayal of Leaders on Social Media]]></title><description><![CDATA[Social media platforms allow leaders to raise their profiles and directly communicate with citizens. 
The incentives for leaders to present themselves visually still differ.]]></description><link>https://ciass.uni-konstanz.de/how-leaders-portray-themselves-on-social-media/</link><guid isPermaLink="false">6408b9c9abf43f0001a382d5</guid><category><![CDATA[Project]]></category><dc:creator><![CDATA[Stefan Scholz]]></dc:creator><pubDate>Wed, 21 Sep 2022 10:10:00 GMT</pubDate><media:content url="https://ciass.uni-konstanz.de/content/images/2023/03/trump.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://ciass.uni-konstanz.de/content/images/2023/03/trump.jpg" alt="Self-Portrayal of Leaders on Social Media"><p>To advance political communication research, we are examining social media, which allow political leaders to raise their profiles through the news and communicate directly with citizens. Previous research has shown how effective visual representations are in influencing citizens&apos; judgements of their politicians. To analyze these representations, we identified 602 heads of state who have governed one of the 193 United Nations member states in the last 10 years. They posted 1,317,885 images and videos on Twitter, which we collected for our dataset.</p><p>To study how heads of state portray themselves in the pictures they post, we use a face recognition algorithm. The algorithm checks whether a face in the picture resembles the politician&apos;s face. This algorithm was developed by a team of computer vision researchers; we only apply it to the faces of politicians in their own pictures. We further investigate whether politicians present themselves in the company of other people and how high the proportion of women is. 
This requires another algorithm that can decide whether a face is a woman&apos;s or a man&apos;s.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://ciass.uni-konstanz.de/content/images/2023/03/retinaface-arcface-4.jpg" class="kg-image" alt="Self-Portrayal of Leaders on Social Media" loading="lazy" width="1419" height="966" srcset="https://ciass.uni-konstanz.de/content/images/size/w600/2023/03/retinaface-arcface-4.jpg 600w, https://ciass.uni-konstanz.de/content/images/size/w1000/2023/03/retinaface-arcface-4.jpg 1000w, https://ciass.uni-konstanz.de/content/images/2023/03/retinaface-arcface-4.jpg 1419w" sizes="(min-width: 720px) 720px"><figcaption>In this image, you can see how the face of Jo&#xE3;o Louren&#xE7;o (Head of State of Angola) is identified and eight other faces are recognised.</figcaption></figure><h2 id="research-article">Research Article</h2><p>First insights into the results can be found in the following preprint.</p><p><a href="https://doi.org/10.31235/osf.io/eckav_v1">Keremoglu, Eda, Stefan Scholz, and Nils B. Weidmann. 2025. &#x201C;Visual Politics in the Digital Age: A Comparative Analysis of Democratic and Autocratic Leaders on Social Media.&#x201D; SocArXiv. May 14. doi:10.31235/osf.io/eckav_v1.</a></p>]]></content:encoded></item><item><title><![CDATA[Wickedonna]]></title><description><![CDATA[In the Wickedonna blog, two activists reported on protests in China. They collected social media posts for over 74,000 protest events containing images of the protesters, security forces, banners, and more. 
We will improve the prediction of protest in images by using the images from this blog.]]></description><link>https://ciass.uni-konstanz.de/wickedonna/</link><guid isPermaLink="false">62581da19f41890001175d4b</guid><category><![CDATA[Project]]></category><dc:creator><![CDATA[Stefan Scholz]]></dc:creator><pubDate>Wed, 20 Oct 2021 08:22:50 GMT</pubDate><media:content url="https://ciass.uni-konstanz.de/content/images/2021/12/wickedonna2-1.png" medium="image"/><content:encoded><![CDATA[<h2 id="lack-of-protest-image-datasets">Lack of protest image datasets</h2><img src="https://ciass.uni-konstanz.de/content/images/2021/12/wickedonna2-1.png" alt="Wickedonna"><p>Very few images depicting protests have been collected so far. The existing datasets mostly cover images from all types of social events, with protests often underrepresented among concerts, conferences, exhibitions, sports, and theaters. The few datasets that specialize in protest events, though, are limited by their small number of protest images. We are excited that Christian Goebel and his team at the University of Vienna have annotated a new dataset of protest images. They collected protest images by scraping the images from the Wickedonna dataset. The non-protest images were collected from Weibo posts that (1) had a low probability of protest according to a text classifier and (2) were verified by hand. From their data collection, a dataset with about 20,000 protest images and 20,000 non-protest images was obtained. More information about the dataset can be found in his article (referenced below). By using this dataset, our aim is to understand what matters in a protest image dataset.</p><h2 id="predict-protest-in-images">Predict protest in images</h2><p>In addition to the lack of images, Christian and his team have already tackled another problem &#x2013; the prediction of protest. 
Ultimately, in peace and conflict research, we want to improve our understanding of the emergence, change, and dissolution of political protests. Images are a source of information that has rarely been used so far. This is probably because the corresponding methods are still quite new and additionally require large datasets and substantial computational resources. Christian has nevertheless trained a convolutional neural network, which achieves an accuracy of 92.23% on his validation dataset. More information about the model can also be found in his article (referenced below). Our goal is to optimize this existing model and to make the underlying convolutional neural network explainable through its filters and weights.</p><hr><h2 id="references">References</h2><ul><li><a href="https://www.tandfonline.com/doi/full/10.1080/10670564.2020.1790897">G&#xF6;bel, C. (2021). The Political Logic of Protest Repression in China. Journal of Contemporary China, 30(128), 169&#x2013;185. doi:10.1080/10670564.2020.1790897</a></li></ul>]]></content:encoded></item></channel></rss>