Towards network-neutral artificial intelligence and open dialogue

Unlike the contribution model that has underpinned the development of web content since the late 20th century, conversational agents built on AI language models do not allow users to publish content, products or innovations directly. Nor does AI software allow individuals to publicly comment on query results, criticize them or, for example, rate the quality of a commercial product recommended by the chatbot.

The problem lies in the way language models are trained: at present, the selection of the seed data used by conversational agents is controlled by private companies pursuing commercial interests. A truly participatory language model has not yet been invented, and we are still far from achieving one. Not only does the political will to pursue research in this direction appear to be lacking, even in the open-source community, but artificial intelligence is currently being used to limit the freedom of expression of Internet users. The censorship policies that distributing companies apply to their language models are a striking example.

A conversational agent adhering to the principle of net neutrality should present the various arguments in a given debate in detail and quantitatively, without taking a definitive position and certainly without suppressing viewpoints, whether minority or conspiratorial. This impartiality should also apply to commercial product guides, for fundamental reasons related to competition law, freedom of establishment and, more broadly, the quality of the advice chatbots provide. The quality of AI models is thus closely tied to their ability to incorporate user input and comments transparently.
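To make this requirement concrete, here is a minimal sketch in Python of what such a neutrality-respecting answer format could look like. The names (`Viewpoint`, `render_debate`, `mentions`) are hypothetical illustrations, not an existing API: the point is simply that every position is rendered with its quantitative share of the sources, and none is filtered out.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    summary: str           # one-line statement of the position (hypothetical field)
    arguments: list[str]   # supporting arguments, kept verbatim
    mentions: int          # how often the position appears in the sampled sources

def render_debate(question: str, viewpoints: list[Viewpoint]) -> str:
    """Present every viewpoint with its quantitative share, suppressing none."""
    total = sum(v.mentions for v in viewpoints) or 1
    lines = [f"Question: {question}"]
    # Deliberately no filtering and no editorial score: positions are merely
    # ordered by prevalence, and even marginal viewpoints remain visible.
    for v in sorted(viewpoints, key=lambda v: v.mentions, reverse=True):
        share = 100 * v.mentions / total
        lines.append(f"- {v.summary} ({share:.0f}% of sampled sources)")
        lines.extend(f"    * {a}" for a in v.arguments)
    return "\n".join(lines)
```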

Pre-selecting a dogmatic truth through the censorship of commercial language models appears fundamentally counterproductive in terms of qualitative performance, respect for fundamental rights, innovation and the participatory method that was one of the main advantages of the old Internet, including for companies such as Google, which built its growth on advertising revenue. On the one hand, institutional actors, associations and civil society should intervene to implement and promote AI software that respects fundamental freedoms and net neutrality, so as to improve the quality of new, open and transparent language models. On the other hand, when it comes to innovation and research, political will is also needed to build channels for direct public participation into language-model training algorithms.

This means that the use of AI technologies in courts or parliaments would require a form of real-time algorithmic self-training, in which each new contribution modifies the pre-training dataset. Just as Web 2.0 tried (and failed) to encourage everyone to publish contributions, or as dynamic databases personalized the old Web according to each user's behaviour, we can expect a dynamic "AI 2.0" that publishes each user's contributions and integrates them according to their qualitative assessment. This line of research leads us to ask how qualitative assessments of content or commercial products could emerge from fundamentally statistical and quantitative language models. Are conversational agents not at risk of recommending, on average, the most frequently purchased products, or of favoring the ideological arguments most prevalent among users?
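As a rough illustration of the difference between frequency-driven and quality-driven aggregation, the following Python sketch samples training contributions in proportion to an assessed quality score rather than raw volume. `quality_weight` and `build_training_batch` are hypothetical names, and the smoothed approval rate used here is only one of many possible assessment signals.

```python
import random

def quality_weight(upvotes: int, downvotes: int) -> float:
    """Laplace-smoothed approval rate, so sheer volume alone does not dominate."""
    return (upvotes + 1) / (upvotes + downvotes + 2)

def build_training_batch(contributions, batch_size=32):
    """Sample contributions in proportion to assessed quality, not raw frequency.

    `contributions` is a list of (text, upvotes, downvotes) tuples. A
    frequency-only sampler would pick uniformly over all occurrences,
    reproducing the majority-view bias described above.
    """
    weights = [quality_weight(u, d) for _, u, d in contributions]
    texts = [t for t, _, _ in contributions]
    return random.choices(texts, weights=weights, k=batch_size)
```

A purely frequency-based sampler would reproduce exactly the majority bias the question above warns against; weighting by an assessed quality signal is one way a statistical model could still encode a qualitative judgment.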

Surprisingly, the answer to questions about the qualitative performance of generative AI software and its relationship to contributory creativity, in other words to collective intelligence, is already determined by the political and psychological choices of AI designers. Censorship mechanisms, the cultural stereotypes embedded in training data, and the economic biases of the companies developing the software all constitute significant obstacles to the qualitative effectiveness of language models.