
    AI AND CONTENT MODERATION: HOW THE REGULATORY LANDSCAPE IS SHAPING UP
    June 15, 2020

    The biggest challenge facing technology today isn’t adoption, it’s regulation. Innovation is moving at such a rapid pace that the legal and regulatory implications are lagging behind what’s possible.

    Artificial Intelligence (AI) is one area where it’s particularly tricky for regulators to reach consensus; content moderation is another.

    With the two becoming increasingly crucial to all kinds of businesses – especially online marketplaces, sharing-economy platforms, and dating sites – it’s clear that more needs to be done to ensure the safety of users.

    But to what extent are regulations stifling progress? Are they justified in doing so? Let’s consider the current situation.

    AI + Moderation: A Perfect Pairing

    Wherever there’s User Generated Content (UGC), there’s a need to moderate it; whether we’re talking about upholding YouTube censorship or netting catfish on Tinder.

    Given the vast amount of content uploaded daily – and the sheer volume of usage on a popular platform like eBay – it’s clear that while action needs to be taken, relying on human moderation alone is unsustainable.

    Enter AI – but not necessarily as most people will know it (we’re still a long way from sapient androids). Mainly, where content moderation is concerned, the use of AI involves machine learning algorithms – which platform owners can configure to filter out words, images, and video content that contravenes policies, laws, and best practices.
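    As a rough, hypothetical sketch of how such configurable filtering might work (the blocked terms, the pattern, and the function name below are invented for illustration, not any real platform’s rules):

        import re

        # Hypothetical, platform-configurable policy: blocked terms and patterns.
        BLOCKED_TERMS = {"counterfeit", "unlicensed firearm"}   # illustrative only
        BLOCKED_PATTERNS = [re.compile(r"\b\d{16}\b")]          # e.g. raw card numbers

        def violates_policy(text: str) -> bool:
            """Return True if the text trips any configured rule."""
            lowered = text.lower()
            if any(term in lowered for term in BLOCKED_TERMS):
                return True
            return any(p.search(text) for p in BLOCKED_PATTERNS)

        listing = "Genuine designer bag, definitely not counterfeit!"
        print(violates_policy(listing))  # True - flagged for review

    In practice, rule-based checks like these are only the first layer; trained classifiers handle images, video, and the subtler text cases.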

    AI not only offers the scale, capacity, and speed needed to moderate huge volumes of content; it also limits the often-cited psychological effects many people suffer from viewing and moderating harmful content.

    Understanding The Wider Issue

    So what’s the problem? Issues arise when we consider content moderation on a global scale. Laws governing online censorship (and the extent to which they’re enforced) vary significantly between continents, nations, and regions.

    What constitutes ‘harmful’, ‘illicit’, or ‘bad taste’ isn’t always as clear-cut as one might think. And from a sales perspective, items that are illegal in one nation aren’t always illegal in another – a lot needs to be taken into account.

    But what about the role of AI? What objections could there be to software that provides huge economies of scale and operational efficiency, and protects people – both users and moderators – from harm?

    The broader context of AI as a technology needs to be better understood. Its use and deployment raise several key ethical questions, and attitudes to those questions vary from country to country, much as efforts to regulate content moderation do.

    To understand this better, we need to look at the ways in which different nations are addressing the challenges of digitalisation – and what their attitudes are towards both online moderation and AI.

    The EU: Apply Pressure To Platforms

    As an individual region, the EU is arguably leading the global debate on online safety. However, the European Commission continues to voice concerns over the (lack of) effort made by large technology platforms to prevent the spread of offensive and misleading content.

    Following the introduction of its Code of Practice on Disinformation in 2018, numerous high-profile tech companies – including Google, Facebook, Twitter, Microsoft, and Mozilla – voluntarily provided the Commission with self-assessment reports in early 2019.

    These reports document the policies and processes these organisations have undertaken to prevent the spread of harmful content and fake news online.

    While a thorough analysis is currently underway (with findings to be reported in 2020), initial responses show significant dissatisfaction relating to the progress being made – and with the fact that no additional tech companies have signed up to the initiative.

    AI In The EU

    In short, expectations continue to be very high – as evidenced by (and as covered in a previous blog) the European Parliament’s vote to give online businesses one hour to remove terrorist-related content.

    Given the immediacy, frequency, and scale that these regulations require, it’s clear that AI has a critical and central role to play in meeting these moderation demands. But, as an emerging technology itself, the regulations around AI are still being formalised in Europe.
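    One way a platform might operationalise a hard removal deadline is a review queue ordered by time-to-deadline. This is a minimal sketch under assumed deadlines – the one-hour figure echoes the proposal above; the other categories and values are invented:

        import heapq
        import time

        # Assumed per-category removal deadlines, in seconds (illustrative only).
        DEADLINES = {"terrorist": 3600, "hate_speech": 24 * 3600, "spam": 7 * 24 * 3600}

        queue = []  # min-heap of (deadline_timestamp, item_id)

        def enqueue(item_id: str, category: str) -> None:
            heapq.heappush(queue, (time.time() + DEADLINES[category], item_id))

        def next_item() -> str:
            """Pop the item whose removal deadline is closest."""
            _, item_id = heapq.heappop(queue)
            return item_id

        enqueue("post-123", "spam")
        enqueue("post-456", "terrorist")
        print(next_item())  # post-456 - the one-hour deadline comes first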

    However, the proposed Digital Services Act (set to replace the now outdated eCommerce Directive) goes a long way to address issues relating to online marketplaces and classified sites – and AI is given significant consideration as part of these efforts.

    Last year, the EU published its guidelines on ethics in Artificial Intelligence, citing a ‘human-centric approach’ as one of its key concerns – as it deems that ‘AI poses risks to the right to personal data protection and privacy’, as well as a ‘risk of discrimination when algorithms are used for purposes such as to profile people or to resolve situations in criminal justice’.

    While these developments are promising, in that they demonstrate the depth and seriousness with which the EU is tackling these issues, problems will no doubt arise when adoption and enforcement by 27 different member states are required.

    Britain Online Post-Brexit

    One nation that no longer needs to participate in EU-centric discussions is the UK – following its departure in January this year. However, rather than deviate from regulation, Britain’s stance on online safety continues to set a high bar.

    An ‘Online Harms’ whitepaper produced last year (pre-Brexit) sets out Britain’s ambition to be ‘the safest place in the world to go online’ and proposes a revised system of accountability that moves beyond self-regulation, including the establishment of a new independent regulator.

    Included in this is a commitment to uphold GDPR and data-protection laws – including a promise to ‘inspect’ AI and penalise those who breach data security. The whitepaper also acknowledges the ‘complex, fast-moving and far-reaching ethical and economic issues that cannot be addressed by data-protection laws alone’.

    To this end, a Centre for Data Ethics and Innovation has been established in the UK – complete with a two-year strategy setting out its aims and ambitions, which largely involves cross-industry collaboration, greater transparency, and continuous governance.

    Expansion Elsewhere

    Numerous other countries – from Canada to Australia – have expressed a formal commitment to addressing the challenges facing AI, data protection, and content moderation. However, on a broader international level, the Organisation for Economic Co-operation and Development (OECD) has established some well-respected Principles on Artificial Intelligence.

    Set out in May 2019 as five simple tenets designed to encourage the responsible ‘stewardship’ of AI, these principles have since been adopted by the G20 in its stance on AI.

    They are defined as:

    • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
    • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
    • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
    • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
    • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
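    The second principle’s call for ‘enabling human intervention where necessary’ maps naturally onto how moderation systems are often built: automation acts only where it is confident, and people handle the rest. A minimal sketch, with an assumed (invented) confidence threshold:

        # Confidence-gated routing: auto-act on high confidence, escalate the rest.
        REVIEW_THRESHOLD = 0.90   # assumed value, for illustration only

        def route(item_id: str, removal_score: float) -> str:
            """Route a moderation decision based on model confidence."""
            if removal_score >= REVIEW_THRESHOLD:
                return f"{item_id}: removed automatically"
            if removal_score >= 0.5:
                return f"{item_id}: queued for human moderator"
            return f"{item_id}: published"

        print(route("img-42", 0.97))  # removed automatically
        print(route("img-43", 0.62))  # queued for human moderator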

    While not legally binding, the hope is that the influence and reach these principles have on a global scale will eventually encourage wider adoption. However, given the myriad cultural and legal differences the tech sector faces, international standardisation remains a massive challenge.

    The Right Approach – Hurt By Overt Complexity

    All things considered, while the right strategic measures are, for the most part, in place – helping to sustain discussion around the key issues – the effectiveness of many of these regulations remains to be seen.

    Outwardly, many nations seem to share the same top-line attitudes towards AI and content moderation – and their necessity in reducing harmful content. However, applying policies from specific countries to global content is challenging and adds to the overall complexity, as content may be created in one country and viewed in another.
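    To make the cross-border problem concrete, here is a toy sketch in which visibility is decided by the viewer’s jurisdiction rather than the creator’s (the categories and rules below are invented for illustration; real policy tables are far larger):

        # Invented per-jurisdiction rules - illustrative only.
        POLICY = {
            "DE": {"nazi_memorabilia": "blocked", "replica_weapon": "allowed"},
            "US": {"nazi_memorabilia": "allowed", "replica_weapon": "allowed"},
        }

        def visible_to(category: str, viewer_country: str) -> bool:
            """Decide by where content is viewed, not where it was created."""
            rules = POLICY.get(viewer_country, {})
            return rules.get(category, "blocked") == "allowed"

        # The same listing, created once, gets different verdicts per viewer:
        print(visible_to("nazi_memorabilia", "US"))  # True
        print(visible_to("nazi_memorabilia", "DE"))  # False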

    This is why machine learning is so critical to moderation – algorithms can be trained to do the hard work at scale. But the biggest stumbling block in all of this seems to be a lack of clarity around what artificial intelligence truly is.

    As one piece of Ofcom research notes, there’s a need to develop ‘explainable systems’, as few people outside computer science can legitimately grasp the complexities of these technologies.

    The problem posed in this research is that some aspects of AI – namely neural networks, which are designed to replicate how the human brain learns – are so advanced that even the developers who create them cannot fully explain how or why an algorithm produces the output it does.
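    By contrast, simpler models can be made inspectable. As a toy illustration (invented training data; scikit-learn assumed available), a linear text classifier exposes a weight per term, so a moderation decision can be traced back to the words that drove it:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        # Tiny invented dataset: 1 = violates policy, 0 = acceptable.
        texts = ["buy cheap meds now", "lovely holiday photos",
                 "cheap pills shipped fast", "family picnic album"]
        labels = [1, 0, 1, 0]

        vec = TfidfVectorizer()
        clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

        # Unlike a deep neural network, each term's contribution can be read off:
        weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
        for term, w in sorted(weights.items(), key=lambda kv: -abs(kv[1]))[:3]:
            print(f"{term}: {w:+.2f}")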

    While machine learning moderation doesn’t delve as far into the ‘unknowable’ as neural networks, it’s clear to see why discussions around regulation persist at great length.

    But, as with the technologies themselves, staying ahead of the curve from a regulatory and commercial standpoint is a process of continuous improvement. That’s something that won’t change anytime soon.

    New laws and legislation can be hard to navigate. Besedo helps businesses like yours get everything in place quickly and efficiently to comply with them. Get in touch with us!

    Originally posted on besedo.com
