<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" > <channel> <title>social media content - HSToday</title> <atom:link href="https://www.hstoday.us/tag/social-media-content/feed/" rel="self" type="application/rss+xml" /> <link>https://www.hstoday.us</link> <description>HSToday</description> <lastBuildDate>Tue, 25 Oct 2022 03:13:41 +0000</lastBuildDate> <language>en-US</language> <sy:updatePeriod> hourly </sy:updatePeriod> <sy:updateFrequency> 1 </sy:updateFrequency> <image> <url>https://www.hstoday.us/wp-content/uploads/2021/08/cropped-2-150x150.png</url> <title>social media content - HSToday</title> <link>https://www.hstoday.us</link> <width>32</width> <height>32</height> </image> <item> <title>Monitoring Social Media Platforms: How Intertemporal Dynamics Affect Radicalization Research</title> <link>https://www.hstoday.us/featured/monitoring-social-media-platforms-how-intertemporal-dynamics-affect-radicalization-research/?utm_source=rss&utm_medium=rss&utm_campaign=monitoring-social-media-platforms-how-intertemporal-dynamics-affect-radicalization-research</link> <comments>https://www.hstoday.us/featured/monitoring-social-media-platforms-how-intertemporal-dynamics-affect-radicalization-research/#respond</comments> <dc:creator><![CDATA[Dennis Klinkhammer]]></dc:creator> <pubDate>Tue, 25 Oct 2022 03:13:41 +0000</pubDate> <category><![CDATA[Counterterrorism]]></category> <category><![CDATA[Featured]]></category> <category><![CDATA[Terrorism Study]]></category> <category><![CDATA[Facebook]]></category> <category><![CDATA[online extremism]]></category> <category><![CDATA[online radicalization]]></category> <category><![CDATA[social media]]></category> 
<category><![CDATA[social media content]]></category> <category><![CDATA[social media regulation]]></category> <category><![CDATA[Twitter]]></category> <guid isPermaLink="false">https://www.hstoday.us/?p=160654</guid> <description><![CDATA[<a href="https://www.hstoday.us/featured/monitoring-social-media-platforms-how-intertemporal-dynamics-affect-radicalization-research/" title="Monitoring Social Media Platforms: How Intertemporal Dynamics Affect Radicalization Research" rel="nofollow"><img width="150" height="150" src="https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-11.09.01-PM-150x150.png" class="webfeedsFeaturedVisual wp-post-image" alt="" style="float: left; margin-right: 5px;" link_thumbnail="1" decoding="async" /></a><p>A longitudinal perspective takes into account changing size and topics of echo chambers over time and considers that radicalization is a process.</p> The post <a href="https://www.hstoday.us/featured/monitoring-social-media-platforms-how-intertemporal-dynamics-affect-radicalization-research/">Monitoring Social Media Platforms: How Intertemporal Dynamics Affect Radicalization Research</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></description> <content:encoded><![CDATA[<a href="https://www.hstoday.us/featured/monitoring-social-media-platforms-how-intertemporal-dynamics-affect-radicalization-research/" title="Monitoring Social Media Platforms: How Intertemporal Dynamics Affect Radicalization Research" rel="nofollow"><img width="150" height="150" src="https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-11.09.01-PM-150x150.png" class="webfeedsFeaturedVisual wp-post-image" alt="" style="float: left; margin-right: 5px;" link_thumbnail="1" decoding="async" /></a><p>Social media platforms like Twitter have demonstrated a continuous increase of active users over the most recent years (Pereira-Kohatsu et al. 2019). 
An average of 500 million tweets per day, combined with a low barrier to participation, produces a high diversity of opinions (Koehler 2015). Platforms such as Facebook, YouTube and Instagram record even more activity, with growth rates increasing over time (Dixon 2022a, Dixon 2022b). Furthermore, Twitter and other social media platforms should not be interpreted as one singular social network, but as several social sub-networks that enable users to exchange information with each other. Some of these sub-networks are so-called echo chambers (Bright 2017). Echo chambers can arise through an accumulation of thematically related comments, replies, likes and followers. Users usually participate in echo chambers that correspond with their own opinion, and so the eponymous echo arises. Since most social media platforms allow their users to switch quickly and easily from one social sub-network to another (Prior 2005), echo chambers are likely to arise and to shape the inherent, intertemporal dynamics of social media platforms, whereby both the topics and the intensity of communication about those topics can change over time. Within an echo chamber one’s own opinion is confirmed, and this confirmation bias can distort the perception of social phenomena outside the platform (Cinelli et al. 2021; Jacobs & Spierings 2018). It has already been shown that these confirmation biases within echo chambers – especially those with political agendas – can lead to a gradual escalation from radical to extreme to anti-constitutional opinions (O’Hara & Stevens 2015).</p> <p>However, according to Neumann (2013), what counts as an extreme or anti-constitutional opinion is context-specific and must be assessed against the accepted socio-political realities of the observed society.
Extremism as a phenomenon emerges from the process of radicalization over time and can be divided into cognitive and violent extremism, which could ultimately endanger the life, freedom and rights of others (Wiktorowicz 2005; Neumann et al. 2018). The process of radicalization is particularly favored by the fact that echo chambers enable the continuing defamation of dissenters, and in some cases these defamation strategies also pursue political influence (Glaser & Pfeiffer 2017). Specific forms of such negative communication are called hate speech and aim at the exclusion of individuals or groups because of their ethnicity, sexual orientation, gender identity, disability, religion or political views (Pereira-Kohatsu et al. 2019; Warner & Hirschberg 2012). According to Kay (2011) and Sunstein (2006), extremist networks show a low tolerance toward individuals and groups who think differently and are generally less cosmopolitan. As a result of these echo chambers, hate speech and radicalizing elements appear in increasing numbers on social media platforms (Reichelmann et al. 2020; Barberá et al. 2015). Therefore, social media platforms are often accused of being a venue for polarizing, racist, antisemitic or anti-constitutional content (Awan 2017; Gerstenfeld et al. 2003). This content is usually also freely accessible to children and young people, and it seems that a small minority of extremists is able to shape and exploit the intertemporal dynamics of social media platforms in order to spread their point of view beyond their echo chamber (Machackova et al. 2020). One could also say that these users have mastered the rules of social media platforms.</p> <h4>Longitudinal analyses as methodological approach</h4> <p>Social media elements such as comments, replies, likes and followers are used in radicalization research to investigate communication patterns inside social networks and social sub-networks.
They allow researchers to focus on the role of individual users and on the influence that the content of their social media-based behavior might have on the underlying structures of a social network or sub-network (Klinkhammer 2020; Wienigk & Klinkhammer 2021). Some research methodologists have assumed, not only in the context of radicalization research, that social media platforms could also serve as a sensor of the real world and provide important information for criminological investigations and predictions (Scanlon & Gerber 2015; Sui et al. 2014). Corresponding research has been published by the German Police University (Hamachers et al. 2020), and five studies represent the scientific efforts to identify hate speech and extremism on social media platforms (Charitidis et al. 2020; Mandl et al. 2019; Wiegand et al. 2018; Bretschneider & Peters 2017; Ross et al. 2017). Some of these papers draw on mathematical and statistical methods to identify hate speech and extremism. So far, regression and classification models are most commonly used in machine learning-based approaches; a few approaches are based on simple neural networks (Schmidt & Wiegand 2017), whereas more sophisticated ones make use of convolutional neural networks (Hamachers et al. 2020).</p> <p>While it is methodologically feasible to count hate speech-associated comments and radicalizing elements and, for example, to study the impact of anti-hate laws on social media platforms using semi-automated, merely descriptive approaches, automated identification without human supervision has proven to be error-prone. For example, means and variances used as reference values in many of these cross-sectional approaches lead to a correct identification only in the short term (Klinkhammer 2020). The same approaches can produce false positive or false negative results when conducted again at a later point in time.
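Concretely, a fixed mean-and-variance rule can flip its verdict once the baseline of the surrounding echo chamber shifts. The following is a minimal, hypothetical sketch of that failure mode; the comment counts and the two-standard-deviation threshold are illustrative assumptions, not values or code from the cited studies:

```python
import statistics

def flag_above_average(user_count, chamber_counts, k=2.0):
    """Cross-sectional rule: flag a user whose comment count exceeds
    the chamber mean by more than k standard deviations."""
    mean = statistics.mean(chamber_counts)
    sd = statistics.stdev(chamber_counts)
    return user_count > mean + k * sd

# Hypothetical comment counts within one echo chamber at two time points.
chamber_t1 = [3, 4, 2, 5, 3, 4, 2, 3]          # quiet chamber
chamber_t2 = [14, 18, 11, 20, 16, 13, 17, 15]  # mobilized chamber

user = 12  # the same user posts 12 comments at both time points

print(flag_above_average(user, chamber_t1))  # flagged at t1
print(flag_above_average(user, chamber_t2))  # no longer flagged at t2
```

The user's behavior is identical at both time points; only the chamber's baseline has moved, which is why the same cross-sectional threshold yields a positive identification at one time point and a negative one at another.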
For example, at a certain point in time a user writes an above-average number of comments, where the average is derived from the patterns of communication within that user’s echo chamber. At a later point in time, however, this average may have shifted so that the same user is no longer above average. The quantitative indicators have changed, but not the attitude the user expressed in a comment. This poses a challenge for the monitoring of social media platforms and can be illustrated with an application example: With regard to the identitarian movement in Germany, it could be shown that the blocking of extremist users’ accounts, as provided for by a newly drafted anti-hate law, led to a temporary reduction in hate speech and extremist content. However, this applied only immediately after the accounts had been blocked. A short time later, followers – who had not been blocked – mobilized, increased their social media activity and switched to different sub-networks and echo chambers. As a result, there were more hate speech and extremist comments than before the blocking (Wienigk & Klinkhammer 2021). These intertemporal dynamics could only be discovered by using a longitudinal approach.</p> <p>Furthermore, in a cross-sectional approach, the fact that someone writes more comments, gives and receives many replies and likes, and has a large number of followers does not necessarily indicate that a radicalization process has started or is ongoing, even if the content is primarily polarizing, racist, antisemitic or anti-constitutional. This might be because narratives and counter-narratives tend to clash on social media platforms, especially in the course of interventions such as the application of anti-hate laws. For example, a longitudinal perspective reveals an increased scattering within the patterns of communication as a reaction to counter-narratives.
As a result, the amplitude of the intertemporal dynamics is affected as well. Although this effect does not seem to be permanent, it tends to disguise relevant actors within the increased scattering (Figure 1).</p> <p><img fetchpriority="high" decoding="async" class="aligncenter wp-image-160655 size-large" src="https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM-1024x685.png" alt="" width="696" height="466" srcset="https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM-1024x685.png 1024w, https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM-300x201.png 300w, https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM-768x514.png 768w, https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM-150x100.png 150w, https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM-696x465.png 696w, https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM-1068x714.png 1068w, https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM-628x420.png 628w, https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM-595xh.png 595w, https://www.hstoday.us/wp-content/uploads/2022/10/Screen-Shot-2022-10-24-at-8.26.12-PM.png 1292w" sizes="(max-width: 696px) 100vw, 696px" /></p> <p>Therefore, cross-sectional analyses, as currently conducted in radicalization research, may fall short of reliability as a scientific research criterion. Again, the decisive factor could be the intertemporal dynamics of social media platforms (Klinkhammer 2022; Grogan 2020). Accordingly, given the changing size and topics of echo chambers over time, and considering that radicalization is a process, a longitudinal perspective seems advisable (Greipl et al.
2022).</p> <h4>Intertemporal dynamics: Light and shadow for radicalization research</h4> <p>Taking into account the permeability of echo chambers on social media platforms and the resulting intertemporal dynamics, a longitudinal approach seems necessary to depict process-based phenomena in radicalization research. A longitudinal analysis of tweets collected on Jan. 6, 2021, the day the U.S. Capitol in Washington was stormed, was able to depict these intertemporal dynamics. The aim was to answer the question of whether Trumpists, Republicans and Democrats could be identified over the course of the day based on their social media behavior. Available retrospective data made it possible to reconstruct the course of the day on social media platforms precisely, but supporters and opponents of this political event turned out to be more similar in their patterns of communication than expected (Klinkhammer 2022). In fact, they were so similar that it was almost impossible to differentiate them solely on the basis of their quantitative characteristics. In detail, the social media-based behavior of supporters and opponents seems to vary only within the same range, or, as statisticians would say, over time both vary within the inherent confidence interval of the platform. This is because the intertemporal dynamics are affected by political events and by the corresponding social media comments, replies, likes and followers, and vice versa. As a result, in this example, the quantitative characteristics of Trumpists, Republicans and Democrats turned out to be quite similar with regard to the storming of the U.S. Capitol.</p> <p>This leads to the assumption that if a political event elicits increased activity on social media platforms from one side, it appears to do the same for the other side.
Accordingly, the intertemporal dynamics create synchronous highs and lows around that political event and its representation on social media platforms. This influence is not exclusively due to political or similar events: Topics with different patterns of communication, such as sexual content, can significantly influence the intertemporal dynamics as well, since they not only affect one echo chamber but can spread throughout the platform as a whole. As a result, the permeability of social media platforms like Twitter and the interaction between different echo chambers affect the intertemporal dynamics not only globally (Cinelli et al. 2021), but also partially within the echo chambers. Thereby, phenomena relevant to radicalization research risk being overshadowed by other political events, topics and patterns of communication. The assumption that users who support such events can be identified by above-average quantitative characteristics would therefore be wrong. Furthermore, it would be wrong to use means and variances – the values most commonly used within social media-based radicalization research – without considering the intertemporal dynamics framed by the context. This could result in false-positive identifications.</p> <p>As a result, longitudinal analyses based solely on quantitative characteristics seem less suitable for the targeted identification of individual users on social media platforms, and more suitable for depicting a development over time within echo chambers and across platforms as a whole. This still seems to be in accordance with the findings of Grogan (2020) as well as the suggestion by Greipl et al. (2022) to conduct longitudinal analyses in radicalization research, although they need to be conducted cautiously and prudently.
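One cautious way to operationalize such a longitudinal analysis, sketched here as a hypothetical illustration rather than the cited studies' method, is to compare each interval's activity against a rolling mean and variance band that tracks the platform's intertemporal dynamics instead of a single fixed baseline; all numbers below are invented for the example:

```python
import statistics

def rolling_anomalies(series, window=5, k=2.0):
    """Flag intervals whose activity leaves a rolling mean +/- k*sd band.
    The baseline is recomputed for every window, so a platform-wide
    surge shifts the band instead of staying anomalous forever."""
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline)
        flags.append(abs(series[i] - mean) > k * sd)
    return flags

# Hypothetical hourly comment counts: a platform-wide surge (hours 5-9)
# followed by a genuinely exceptional spike in the final hour.
hourly = [4, 5, 4, 6, 5, 20, 22, 21, 23, 22, 60]
print(rolling_anomalies(hourly))
```

Here the band adapts after the onset of the surge, so only the surge's first hours and the final spike are flagged, whereas a baseline fixed at the start of the series would flag every interval of the surge as anomalous.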
So far, intertemporal dynamics and ongoing developments can be mapped almost in real time via longitudinal analyses, which opens the possibility of the qualitative inspection of social media comments that seems necessary. Accordingly, the importance of qualitative perspectives was appropriately emphasized in the anthology by Hamachers et al. (2020), yet many of its contributions turn out to be cross-sectional and exclusively quantitative. Finally, the question arises whether the similarities found between supporters and opponents of the storming of the U.S. Capitol are merely a result of the predefined structures of social media platforms, which specify the same input format for all their users and may thus have contributed to this challenge all along. Accordingly, sound social media monitoring should always address the question of whether repeated measurements would yield similar insights and permit comparable conclusions. The current state of research raises doubts.</p> <p> </p> <h6>Sources</h6> <h6>Awan, I. (2017): “Cyber-Extremism: Isis and the Power of Social Media.” Society, 54 (3). Online: https://link.springer.com/article/10.1007/s12115-017-0114-0</h6> <h6>Barberá, P.; Jost, J.; Nagler, J.; Tucker, J. & R. Bonneau (2015): “Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?” Psychological Science, 26 (10). Online: https://doi.org/10.1177/0956797615594620</h6> <h6>Bretschneider, U. & R. Peters (2017): “Detecting Offensive Statements towards Foreigners in Social Media.” International Conference on System Sciences. Online: http://dx.doi.org/10.24251/HICSS.2017.268</h6> <h6>Bright, J. (2017): “Explaining the emergence of echo chambers on social media: the role of ideology and extremism.” Online: https://arxiv.org/abs/1609.05003</h6> <h6>Charitidis, P.; Doropoulos, S.; Vologiannidis, S.; Papastergiou, I. & S.
Karakeva (2020): “Towards countering hate speech against journalists on social media.” Online Social Networks and Media, 17. Online: https://arxiv.org/abs/1912.04106</h6> <h6>Cinelli, M.; Morales, G. D. F.; Galeazzi, A.; Quattrociocchi, W. & M. Starnini (2021): “The echo chamber effect on social media.” Online: https://doi.org/10.1073/pnas.2023301118</h6> <h6>Dixon, S. (2022a): “Number of global social network users 2018-2027.” Statista. Online: https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/</h6> <h6>Dixon, S. (2022b): “Most popular social networks worldwide as of January 2022, ranked by number of monthly active users.” Statista. Online: https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/</h6> <h6>Gerstenfeld, P.; Grant, D. & C.-P. Chiang (2003): “Hate Online: A Content Analysis of Extremist Internet Sites.” Analyses of Social Issues and Public Policy, 1. Online: https://doi.org/10.1111/j.1530-2415.2003.00013.x</h6> <h6>Glaser, S. & T. Pfeiffer (2017): “Erlebniswelt Rechtsextremismus: modern – subversiv – hasserfüllt. Hintergründe und Methoden für die Praxis der Prävention.” 5. Auflage. Wochenschau. Frankfurt am Main.</h6> <h6>Greipl, S.; Hohner, J.; Schulze, H. & D. Rieger (2022): “Radikalisierung im Internet: Ansätze zur Differenzierung, empirische Befunde und Perspektiven zu Online-Gruppendynamiken.” In: MOTRA-Monitor 2021. Bundeskriminalamt. Wiesbaden.</h6> <h6>Grogan, M. (2020): “NLP from a time series perspective. How time series analysis can complement NLP.” Towards Data Science. Online: https://towardsdatascience.com/nlp-from-a-time-series-perspective-39c37bc18156</h6> <h6>Hamachers, A.; Weber, K. & S. Jarolimek (2020): “Extremistische Dynamiken im Social Web.” Verlag für Polizeiwissenschaft. Frankfurt am Main.</h6> <h6>Jacobs, K. & N. Spierings (2018): “A populist paradise? Examining populists’ Twitter adoption and use.” Information, Communication & Society, 22 (12).
Online: https://doi.org/10.1080/1369118X.2018.1449883</h6> <h6>Kay, J. (2011): “Among the Truthers: A Journey Through America’s Growing Conspiracist Underground.” HarperCollins. New York.</h6> <h6>Klinkhammer, D. (2020): “Analysing Social Media Network Data with R: Semi-Automated Screening of Users, Comments and Communication Patterns.” Online: https://arxiv.org/abs/2011.13327</h6> <h6>Klinkhammer, D. (2022): “Longitudinal Sentiment Analyses for Radicalization Research: Intertemporal Dynamics on Social Media Platforms and their Implications.” Online: https://arxiv.org/abs/2210.00339</h6> <h6>Koehler, D. (2015): “The Radical Online. Individual Radicalization Processes and the Role of the Internet.” Journal for Deradicalization, 15 (1), 116 – 134.</h6> <h6>Machackova, H.; Blaya, C.; Bedrosova, M.; Smahel, D. & E. Staksrud (2020): “Children’s experiences with cyberhate.” Online: https://www.lse.ac.uk/media-and-communications/assets/documents/research/eu-kids-online/reports/eukocyberhate-22-4-final.pdf</h6> <h6>Mandl, T.; Modha, S.; Majumder, P.; Patel, D.; Dave, M.; Mandlia, C. & A. Patel (2019): “Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages.” 11th Forum for Information Retrieval Evaluation. Online: https://doi.org/10.1145/3368567.3368584</h6> <h6>Neumann, P. (2013): “The Trouble with Radicalization.” International Affairs, 89 (4). Online: https://doi.org/10.1111/1468-2346.12049</h6> <h6>Neumann, P.; Winter, C.; Meleagrou-Hitchens, A.; Ranstorp, M. & L. Vidino (2018): “Die Rolle des Internets und sozialer Medien für Radikalisierung und Deradikalisierung.” PRIF Report, 9.</h6> <h6>O’Hara, K., & D. Stevens (2015): “Echo Chambers and Online Radicalism: Assessing the Internet’s Complicity in Violent Extremism.” Policy & Internet, 7 (4). Online: https://doi.org/10.1002/poi3.88</h6> <h6>Pereira-Kohatsu, J. C.; Quijano-Sánchez, L; Liberatore, F. & M. 
Camacho-Collados (2019): “Detecting and Monitoring Hate Speech in Twitter.” Sensors, 19 (21).</h6> <h6>Prior, M. (2005): “News vs. Entertainment: How Increasing Media Choice Widens Gaps in Political Knowledge and Turnout.” American Journal of Political Science, 49 (3), 577 – 592.</h6> <h6>Reichelmann, A.; Hawdon, J.; Costello, M.; Ryan, J.; Blaya, C.; Llorent, V.; Oksanen, A.; Räsänen, P. & I. Zych (2020): “Hate Knows No Boundaries: Online Hate in Six Nations.” Online: https://doi.org/10.1080/01639625.2020.1722337</h6> <h6>Ross, B.; Rist, M.; Carbonell, G.; Cabrera, B.; Kurowsky, N. & M. Wojatzki (2017): “Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis.” University of Duisburg-Essen Press. Duisburg-Essen.</h6> <h6>Scanlon, J. & M. Gerber (2015): “Forecasting violent extremist cyber recruitment.” IEEE Transactions on Information Forensics and Security, 10 (11). Online: http://dx.doi.org/10.1109/TIFS.2015.2464775</h6> <h6>Schmidt, A. & M. Wiegand (2017): “A Survey on Hate Speech Detection using Natural Language Processing.” 5th International Workshop on Natural Language Processing for Social Media. Online: http://dx.doi.org/10.18653/v1/W17-1101</h6> <h6>Sui, X.; Chen, Z.; Wu, K.; Ren, P.; Ma, J. & F. Zhou (2014): “Social media as sensor in real world: Geolocate user with microblog.” Communications in Computer and Information Science, 496. Online: http://dx.doi.org/10.1007/978-3-662-45924-9_21</h6> <h6>Sunstein, C. (2006): “Infotopia: How Many Minds Produce Knowledge.” Oxford University Press. Oxford.</h6> <h6>Warner, W. & J. Hirschberg (2012): “Detecting Hate Speech on the World Wide Web.” Proceedings of the Second Workshop on Language in Social Media. Online: https://aclanthology.org/W12-2103/</h6> <h6>Wiegand, M.; Siegel, M. & J. Ruppenhofer (2018): “Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language.” University of Saarland Press.
Saarbrücken.</h6> <h6>Wienigk, R. & D. Klinkhammer (2021): “Online-Aktivitäten der Identitären Bewegung auf Twitter – Warum Kontensperrungen die Anzahl an Hassnachrichten nicht reduzieren.” Forum Kriminalprävention. Online: https://www.forum-kriminalpraevention.de/online-aktivitaeten-der-identitaeren-bewegung.html</h6> <h6>Wiktorowicz, Q. (2005): “Radical Islam Rising: Muslim Extremism in the West.” Rowman & Littlefield. London</h6>The post <a href="https://www.hstoday.us/featured/monitoring-social-media-platforms-how-intertemporal-dynamics-affect-radicalization-research/">Monitoring Social Media Platforms: How Intertemporal Dynamics Affect Radicalization Research</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></content:encoded> <wfw:commentRss>https://www.hstoday.us/featured/monitoring-social-media-platforms-how-intertemporal-dynamics-affect-radicalization-research/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> <item> <title>New Twitter Policy Cracks Down on Misinformation</title> <link>https://www.hstoday.us/subject-matter-areas/cybersecurity/new-twitter-policy-cracks-down-on-misinformation/?utm_source=rss&utm_medium=rss&utm_campaign=new-twitter-policy-cracks-down-on-misinformation</link> <comments>https://www.hstoday.us/subject-matter-areas/cybersecurity/new-twitter-policy-cracks-down-on-misinformation/#respond</comments> <dc:creator><![CDATA[Homeland Security Today]]></dc:creator> <pubDate>Fri, 20 May 2022 10:15:34 +0000</pubDate> <category><![CDATA[Cybersecurity]]></category> <category><![CDATA[Ukraine]]></category> <category><![CDATA[disinformation]]></category> <category><![CDATA[fake news]]></category> <category><![CDATA[misinformation]]></category> <category><![CDATA[online propaganda]]></category> <category><![CDATA[Russia invades Ukraine]]></category> <category><![CDATA[social media content]]></category> <guid isPermaLink="false">https://www.hstoday.us/?p=152523</guid> <description><![CDATA[<a 
href="https://www.hstoday.us/subject-matter-areas/cybersecurity/new-twitter-policy-cracks-down-on-misinformation/" title="New Twitter Policy Cracks Down on Misinformation" rel="nofollow"><img width="150" height="150" src="https://www.hstoday.us/wp-content/uploads/2021/07/smartphone-586944_1920-150x150.jpg" class="webfeedsFeaturedVisual wp-post-image" alt="" style="float: left; margin-right: 5px;" link_thumbnail="1" decoding="async" /></a><p>The platform will no longer automatically recommend or emphasize posts that make misleading claims about the Russian invasion of Ukraine.</p> The post <a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/new-twitter-policy-cracks-down-on-misinformation/">New Twitter Policy Cracks Down on Misinformation</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></description> <content:encoded><![CDATA[<a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/new-twitter-policy-cracks-down-on-misinformation/" title="New Twitter Policy Cracks Down on Misinformation" rel="nofollow"><img width="150" height="150" src="https://www.hstoday.us/wp-content/uploads/2021/07/smartphone-586944_1920-150x150.jpg" class="webfeedsFeaturedVisual wp-post-image" alt="" style="float: left; margin-right: 5px;" link_thumbnail="1" decoding="async" /></a><p><span style="font-weight: 400;">Twitter is stepping up its fight against misinformation with a new policy cracking down on posts that spread potentially dangerous false stories. 
The change is part of a broader effort to promote accurate information during times of conflict or crisis.</span></p> <p><span style="font-weight: 400;">The platform will no longer automatically recommend or emphasize posts that make misleading claims about the Russian invasion of Ukraine, including material that mischaracterizes conditions in conflict zones or makes false allegations of war crimes or atrocities against civilians.</span></p> <p><span style="font-weight: 400;">Under its new “crisis misinformation policy,” Twitter will also add warning labels to debunked claims about ongoing humanitarian crises, the San Francisco-based company said. Users won’t be able to like, forward or respond to posts that violate the new rules.</span></p> <p><a href="https://www.npr.org/2022/05/19/1100100329/twitter-misinformation-policy-ukraine?t=1653041291667"><span style="font-weight: 400;">Read the full story at NPR</span></a></p>The post <a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/new-twitter-policy-cracks-down-on-misinformation/">New Twitter Policy Cracks Down on Misinformation</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></content:encoded> <wfw:commentRss>https://www.hstoday.us/subject-matter-areas/cybersecurity/new-twitter-policy-cracks-down-on-misinformation/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> <item> <title>Treaty Needed to Address the ‘Tsunami of Hate’ Targeting Minorities on Social Media</title> <link>https://www.hstoday.us/subject-matter-areas/cybersecurity/treaty-needed-to-address-the-tsunami-of-hate-targeting-minorities-on-social-media/?utm_source=rss&utm_medium=rss&utm_campaign=treaty-needed-to-address-the-tsunami-of-hate-targeting-minorities-on-social-media</link> <comments>https://www.hstoday.us/subject-matter-areas/cybersecurity/treaty-needed-to-address-the-tsunami-of-hate-targeting-minorities-on-social-media/#respond</comments> <dc:creator><![CDATA[Homeland Security 
Today]]></dc:creator> <pubDate>Tue, 16 Mar 2021 10:49:39 +0000</pubDate> <category><![CDATA[Cybersecurity]]></category> <category><![CDATA[Law Enforcement and Public Safety]]></category> <category><![CDATA[hate speech]]></category> <category><![CDATA[minorities online]]></category> <category><![CDATA[online hate]]></category> <category><![CDATA[social media content]]></category> <guid isPermaLink="false">https://www.hstoday.us/?p=126538</guid> <description><![CDATA[<a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/treaty-needed-to-address-the-tsunami-of-hate-targeting-minorities-on-social-media/" title="Treaty Needed to Address the ‘Tsunami of Hate’ Targeting Minorities on Social Media" rel="nofollow"><img width="150" height="150" src="https://www.hstoday.us/wp-content/uploads/2021/03/socialmediahate-150x150.jpg" class="webfeedsFeaturedVisual wp-post-image" alt="" style="float: left; margin-right: 5px;" link_thumbnail="1" decoding="async" /></a><p>Social media has too often been used with “relative impunity” to spread hate, prejudice and violence against minorities, an independent United Nations human rights expert said on March 15, calling for an international treaty to address the growing scourge.</p> The post <a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/treaty-needed-to-address-the-tsunami-of-hate-targeting-minorities-on-social-media/">Treaty Needed to Address the ‘Tsunami of Hate’ Targeting Minorities on Social Media</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></description> <content:encoded><![CDATA[<a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/treaty-needed-to-address-the-tsunami-of-hate-targeting-minorities-on-social-media/" title="Treaty Needed to Address the ‘Tsunami of Hate’ Targeting Minorities on Social Media" rel="nofollow"><img width="150" height="150" src="https://www.hstoday.us/wp-content/uploads/2021/03/socialmediahate-150x150.jpg" class="webfeedsFeaturedVisual 
wp-post-image" alt="" style="float: left; margin-right: 5px;" link_thumbnail="1" decoding="async" /></a><p><span style="font-weight: 400;">Social media has too often been used with “relative impunity” to spread hate, prejudice and violence against minorities, an independent United Nations human rights expert said on March 15, calling for an international treaty to address the growing scourge.</span></p> <p><span style="font-weight: 400;">“The Holocaust did not start with the gas chambers, it started with hate speech against a minority”, warned Fernand de Varennes, the UN Special Rapporteur on minority issues.</span></p> <p><span style="font-weight: 400;">“Dehumanizing language, even reducing minorities to pests, normalizes violence against them and makes their persecution and eventual elimination acceptable”, he added.</span></p> <p><span style="font-weight: 400;">The UN rights envoy pointed out that in some countries, while more than three-quarters of hate speech cases target minorities, efforts to combat online occurrences seldom focus on, or even acknowledge, minorities.</span></p> <p><span style="font-weight: 400;">This can be lethal – not only leading to massive atrocities and human rights violations but also creating conditions for potential conflict.</span></p> <p><span style="font-weight: 400;">“States, civil society and social media platforms have the duty to take further steps towards the full and effective implementation of the human rights obligations involved”, said the Special Rapporteur.</span></p> <p><span style="font-weight: 400;">He said the starting point to address the scourge was “to criminalize the severest forms of hate speech, to prohibit other less ‘severe’ forms, and to take administrative and other measures to counteract less severe forms of hate flowing from prejudice, racism and intolerance which may be harmful to society at large.”</span></p> <p><span style="font-weight: 400;">He maintained that States must act quickly to counter 
online hate speech against minorities, including by effectively investigating and prosecuting those responsible, holding them accountable, and ensuring that victims have effective access to justice and remedy.</span></p> <p><span style="font-weight: 400;">“With regard to social media platforms, minorities should specifically be identified as priorities”, said the UN rights envoy. “Social media’s content moderation systems and community standards and any oversight or appeal entity should clearly commit to protecting vulnerable and marginalized minorities and other groups and systematically integrate fully human rights standards into the content policies and decision mechanisms of their platforms”.</span></p> <p><span style="font-weight: 400;">However, he flagged that “this is still usually not the case”.</span></p> <p><span style="font-weight: 400;">It is time for “a human rights-centred regulatory framework” that clearly outlines the obligations of States, social media businesses and others to “regulate hate speech, focusing on the most prevalent and harmful forms of hate – and that is hate against minorities”, Mr. 
de Varennes said.</span></p> <p><span style="font-weight: 400;">He called for this as a matter of urgency, as well as for a future legally binding instrument.</span></p> <p><a href="https://news.un.org/en/story/2021/03/1087412"><span style="font-weight: 400;">Read more at the United Nations</span></a></p>The post <a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/treaty-needed-to-address-the-tsunami-of-hate-targeting-minorities-on-social-media/">Treaty Needed to Address the ‘Tsunami of Hate’ Targeting Minorities on Social Media</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></content:encoded> <wfw:commentRss>https://www.hstoday.us/subject-matter-areas/cybersecurity/treaty-needed-to-address-the-tsunami-of-hate-targeting-minorities-on-social-media/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> <item> <title>New ‘Chameleon’ Attack Can Secretly Modify Content on Facebook, Twitter or LinkedIn</title> <link>https://www.hstoday.us/subject-matter-areas/cybersecurity/new-chameleon-attack-can-secretly-modify-content-on-facebook-twitter-or-linkedin/?utm_source=rss&utm_medium=rss&utm_campaign=new-chameleon-attack-can-secretly-modify-content-on-facebook-twitter-or-linkedin</link> <comments>https://www.hstoday.us/subject-matter-areas/cybersecurity/new-chameleon-attack-can-secretly-modify-content-on-facebook-twitter-or-linkedin/#respond</comments> <dc:creator><![CDATA[Homeland Security Today]]></dc:creator> <pubDate>Tue, 21 Jan 2020 14:41:41 +0000</pubDate> <category><![CDATA[Cybersecurity]]></category> <category><![CDATA[chameleon attack]]></category> <category><![CDATA[cyber attack]]></category> <category><![CDATA[social media content]]></category> <category><![CDATA[social network scam]]></category> <guid isPermaLink="false">https://www.hstoday.us/?p=101458</guid> <description><![CDATA[<a
href="https://www.hstoday.us/subject-matter-areas/cybersecurity/new-chameleon-attack-can-secretly-modify-content-on-facebook-twitter-or-linkedin/" title="New ‘Chameleon’ Attack Can Secretly Modify Content on Facebook, Twitter or LinkedIn" rel="nofollow"></a><p>That video or picture you “liked” on social media of a cute dog, your favorite team or political candidate can actually be altered in a cyberattack to something completely different, detrimental and potentially criminal.</p> The post <a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/new-chameleon-attack-can-secretly-modify-content-on-facebook-twitter-or-linkedin/">New ‘Chameleon’ Attack Can Secretly Modify Content on Facebook, Twitter or LinkedIn</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></description> <content:encoded><![CDATA[<a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/new-chameleon-attack-can-secretly-modify-content-on-facebook-twitter-or-linkedin/" title="New ‘Chameleon’ Attack Can Secretly Modify Content on Facebook, Twitter or LinkedIn" rel="nofollow"></a><p>That video or picture you “liked” on social media of a cute dog, your favorite team or political candidate can actually be altered in a cyberattack to something completely different, detrimental and potentially criminal, according to cybersecurity researchers at Ben-Gurion University of the Negev (BGU).</p> <p>The researchers looked at seven online platforms and identified similar serious weaknesses in the management of the posting systems of Facebook, Twitter and LinkedIn. Twitter does not permit changes to posts and, normally, Facebook and LinkedIn indicate a post has been edited. But this new attack overrides that.</p> <p>“Imagine watching and ‘liking’ a cute kitty video in your Facebook feed and a day later a friend calls to find out why you ‘liked’ a video of an ISIS execution,” says Dr. 
Rami Puzis, a researcher in the BGU Department of Software and Information Systems Engineering.</p> <p>“You log back on and find that indeed there’s a ‘like’ there. The repercussions from indicating support by liking something you would never do (Biden vs. Trump, Yankees vs. Red Sox, ISIS vs. USA) from employers, friends, family, or government enforcement unaware of this social media scam can wreak havoc in just minutes.”</p> <p>In this new study, published on <a href="http://click.agilitypr.delivery/wf/click?upn=8geNIULH4f5ciDC4-2FGdX9f-2ByfjIOWjSJMC-2FhcV7g0aZd3ZFxt9T4-2Felf26caCNED_r62vgzZxVHnejmtTwjLE7Zi52WteOQJO4UnbJNcZyFLvspJRvd0i1XEet1FsfdpEF40mqO4dzpm-2FcK48Egg7sPsvZ7jN1-2BoGSZMry2fp8LuMaQSoZBC-2F2O6ZTxb303zg-2FZlgYCPQnnimF8w2rsK8nt8GQ8djIprW2MZHWyMqZk4TxFXOpxOjmjmAPqHSWLgr5qg6dH0wggkMIhHtuY403F0ZRIpUdbKJXVc14qd4XMtavYYZNbnlZJqnQ0YgwYFjqt1bargfKJV-2FL06pDqG2Ox44Fhz2bwv7qheG-2FAyfdwrx43I9E8LzUa9B4MrJoLVBFyq6KjadnK-2BOyJR4jHF9eRzYuUjNpq2acoB3ljr3b0y9b0qtxf5ZnrUdEE1NYYTYuVHchu1SPQB2zwhveL4oiDG6oaDxrOoOl72x-2BhOSwWW92EwfP60Rq6sUxkJtOEbH">arXiv.org</a>, the researchers explain how they penetrated individual profiles and groups in several experiments, and how the Online Social Network (OSN) attack, dubbed “Chameleon,” can be executed. The attack involves maliciously changing the way content is displayed publicly, without any indication that it was changed until you log back on and see it. The post still retains the same likes and comments.</p> <p>“Adversaries can misuse Chameleon posts to launch multiple types of social network scams. First and foremost, social network Chameleons can be used for shaming or incrimination, as well as to facilitate the creation and management of fake profiles in social networks,” Dr. Puzis says.</p> <p>“They can also be used to evade censorship and monitoring, in which a disguised post reveals its true self after being approved by a moderator.
Chameleon posts can also be used to unfairly collect social capital (posts, likes, links, etc.) by first disguising itself as popular content and then revealing its true self and retaining the collected interactions.”</p> <p>Facebook and LinkedIn partially mitigate the problem of modifications made to posts after their publication by displaying an indication that a post was edited. Other OSNs, such as Twitter or Instagram, do not allow published posts to be edited. Nevertheless, the major OSNs (Facebook, Twitter and LinkedIn) allow publishing redirect links, and they support link preview updates. This allows for changing the way a post is displayed without any indication that the target content of the URLs has been changed.</p> <p>In a Chameleon attack, the attacker first collects information about the victim. The attacker then creates Chameleon posts or profiles that contain the redirect links, and attracts the victim’s attention to those posts and profiles in a manner similar to phishing attacks. The Chameleon content builds trust within the OSN, collects social capital and interacts with the victims. This phase is critical to the success of both targeted and untargeted Chameleon attacks. The technique is similar to a general cloaking attack on the web, but users’ trust in the OSN lowers the barrier to attack.</p> <p>BGU researchers have notified LinkedIn, Twitter and Facebook about the identified misuse. Facebook and Twitter run open bug-bounty programs, which often pay significant sums for disclosed vulnerabilities in order to improve their systems and eliminate bugs and malfunctions.
LinkedIn has a closed team of white-hat hackers, but also accepts reports from outsiders without paying bounties.</p> <p>Despite the significance of this issue, and its wide-ranging consequences in a well-targeted attack, the responses from all three social networks are concerning when it comes to protecting billions of platform users worldwide.</p> <p>Facebook responded that the reported issue “appears to describe a phishing attack against Facebook users and infrastructure” and that “such issues do not qualify under our bug bounty program.”</p> <p>Twitter acknowledged the problem and stated in an email, “This behavior has been reported to us previously. While it may not be ideal, at this time, we do not believe this poses more of a risk than the ability to tweet a URL of any kind since the content of any web page may also change without warning.” Twitter relies on URL blacklisting implemented within its URL shortener to identify potentially harmful links and “warn users if they are navigating to a known malicious URL.”</p> <p>The LinkedIn support team was willing to investigate the issue and, after receiving further requested details, began its investigation on Dec. 14, 2019. “We are waiting for updates any day now,” Dr. Puzis says.</p> <p>To mitigate these issues, the BGU team recommends that practitioners and researchers immediately identify potential Chameleon profiles throughout the OSNs, and develop and incorporate redirect reputation mechanisms into machine learning methods for identifying social network misuse. They should also include the Chameleon attack in security awareness programs alongside phishing and related scams.</p> <p>“On social media today, people make judgments in seconds, so this is an issue that requires solving, especially before the upcoming U.S. election,” says Dr. Puzis.</p> <p>The BGU researchers will present the Chameleon attack paper at The Web Conference in Taipei, Taiwan, on April 20-24.
The researchers from the Department of Software and Information Systems Engineering who also participated in this study are Aviad Elyashar, Sagi Uziel and Abigail Paradise.</p> The post <a href="https://www.hstoday.us/subject-matter-areas/cybersecurity/new-chameleon-attack-can-secretly-modify-content-on-facebook-twitter-or-linkedin/">New ‘Chameleon’ Attack Can Secretly Modify Content on Facebook, Twitter or LinkedIn</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></content:encoded> <wfw:commentRss>https://www.hstoday.us/subject-matter-areas/cybersecurity/new-chameleon-attack-can-secretly-modify-content-on-facebook-twitter-or-linkedin/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> <item> <title>New UN and INTERPOL Counterterrorism Handbook Helps Investigators Use Online Data</title> <link>https://www.hstoday.us/subject-matter-areas/counterterrorism/new-un-and-interpol-counterterrorism-handbook-helps-investigators-use-online-data/?utm_source=rss&utm_medium=rss&utm_campaign=new-un-and-interpol-counterterrorism-handbook-helps-investigators-use-online-data</link> <comments>https://www.hstoday.us/subject-matter-areas/counterterrorism/new-un-and-interpol-counterterrorism-handbook-helps-investigators-use-online-data/#respond</comments> <dc:creator><![CDATA[Homeland Security Today]]></dc:creator> <pubDate>Tue, 16 Jul 2019 09:32:29 +0000</pubDate> <category><![CDATA[Counterterrorism]]></category> <category><![CDATA[Education and Training]]></category> <category><![CDATA[Global]]></category> <category><![CDATA[Law Enforcement and Public Safety]]></category> <category><![CDATA[Surveillance, Protection & Detection]]></category> <category><![CDATA[counterterrorism best practice]]></category> <category><![CDATA[foreign terrorist fighters]]></category> <category><![CDATA[online propaganda]]></category> <category><![CDATA[online radicalization]]></category> <category><![CDATA[online terrorist content]]></category>
<category><![CDATA[social media content]]></category> <guid isPermaLink="false">https://www.hstoday.us/?p=95200</guid> <description><![CDATA[<a href="https://www.hstoday.us/subject-matter-areas/counterterrorism/new-un-and-interpol-counterterrorism-handbook-helps-investigators-use-online-data/" title="New UN and INTERPOL Counterterrorism Handbook Helps Investigators Use Online Data" rel="nofollow"><img width="150" height="150" src="https://www.hstoday.us/wp-content/uploads/2019/03/ransomware-2321110_1920-150x150.jpg" class="webfeedsFeaturedVisual wp-post-image" alt="" style="float: left; margin-right: 5px;" link_thumbnail="1" decoding="async" /></a><p>The United Nations Counter-Terrorism Centre (UNCCT) and INTERPOL have jointly produced a handbook to help investigators collect, analyze and share information found online, particularly on social media platforms.</p> The post <a href="https://www.hstoday.us/subject-matter-areas/counterterrorism/new-un-and-interpol-counterterrorism-handbook-helps-investigators-use-online-data/">New UN and INTERPOL Counterterrorism Handbook Helps Investigators Use Online Data</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></description> <content:encoded><![CDATA[<a href="https://www.hstoday.us/subject-matter-areas/counterterrorism/new-un-and-interpol-counterterrorism-handbook-helps-investigators-use-online-data/" title="New UN and INTERPOL Counterterrorism Handbook Helps Investigators Use Online Data" rel="nofollow"><img width="150" height="150" src="https://www.hstoday.us/wp-content/uploads/2019/03/ransomware-2321110_1920-150x150.jpg" class="webfeedsFeaturedVisual wp-post-image" alt="" style="float: left; margin-right: 5px;" link_thumbnail="1" decoding="async" /></a><p>The United Nations Counter-Terrorism Centre (UNCCT) and INTERPOL have jointly produced a handbook to help investigators collect, analyze and share information found online, particularly on social media platforms.</p> <p>Foreign terrorist fighters 
use the internet and social media for a diverse range of terrorist activities, from recruitment and radicalization to planning and funding. Given the immediacy and global reach of online terrorist activities, it is critical for law enforcement officers to understand how best to use the internet to generate online investigative leads and collect and preserve electronic records – often across international borders – in order to contribute to successful prosecutions.</p> <p>The handbook, entitled “Using the Internet and Social Media for Counter-Terrorism Investigations,” shares good practices and offers a comprehensive list of practical online tools.</p> <p>It provides insight into how terrorists have adapted the way they use the internet and social media and continue to be active online; shares good practices in conducting an online counter-terrorism investigation; and explains how to request the preservation and collection of electronic evidence, including from service providers.</p> <p>To receive a copy of the handbook, law enforcement officers should contact the INTERPOL National Central Bureau in their country.</p> <p>The handbook forms part of a wider project on preventing and combating the foreign terrorist fighter phenomenon in the Middle East and North Africa, Southeast Asia and South Asia regions, and it complements a series of training workshops delivered by INTERPOL and UNCCT in those regions between July 2018 and February 2019.</p> <p>As well as building on presentations and discussions held during the regional workshops, the handbook draws on the knowledge and network of the United Nations Global Counter-Terrorism Compact Task Force, including the United Nations Office on Drugs and Crime, the Counter-Terrorism Committee Executive Directorate and the International Association of Prosecutors.</p> <p>This project was completed with contributions from the Governments of Japan, Saudi Arabia and the United Arab Emirates.</p> <p><a
href="https://www.interpol.int/News-and-Events/News/2019/INTERPOL-and-UN-publish-joint-handbook-for-online-counter-terrorism-investigations">Read more at INTERPOL</a></p>The post <a href="https://www.hstoday.us/subject-matter-areas/counterterrorism/new-un-and-interpol-counterterrorism-handbook-helps-investigators-use-online-data/">New UN and INTERPOL Counterterrorism Handbook Helps Investigators Use Online Data</a> appeared first on <a href="https://www.hstoday.us">HSToday</a>.]]></content:encoded> <wfw:commentRss>https://www.hstoday.us/subject-matter-areas/counterterrorism/new-un-and-interpol-counterterrorism-handbook-helps-investigators-use-online-data/feed/</wfw:commentRss> <slash:comments>0</slash:comments> </item> </channel> </rss>