Interview with Teodora Petkova

In April of this year, while teaching a course in terminology management, my students and I started talking about knowledge management, taxonomies, ontologies, knowledge graphs and SEO. This meant it was time for me to have a talk with Teodora Petkova. I had stumbled upon her articles and her book about writing for the Semantic Web a couple of years ago. Teodora is a philologist fascinated by the metamorphoses of text on the Web and curious about the ways the Semantic Web unfolds. Currently, she is a PhD student at Sofia University, exploring how content writing is changing, changing us and the way we think, write and live. Teodora writes for the Ontotext blog and is a co-founder of Web Scriptorium.

The title of your book is Brave New Text, which immediately reminded me of “Brave New World”, the novel by Aldous Huxley published in 1932 about a genetically engineered society where emotions are banned and where one of the characters says “everybody belongs to everyone else”. Do you share the opinion that words, or more generally language, can be manipulated as Huxley described, or do you see positive elements?

Thank you for this question. The title is an intertextual play with “Brave New World” but with a twist. I wanted to convey that the brave new world can never be a total negation or a sanitized version of anything. It is neither the result of technological optimism nor of technological pessimism. It is something we have never envisioned, something that needs to be woven. By us. Together.

Yes, language can be manipulated, just like anything else. We are going very deep down a rabbit hole here, yet I wanted to tell you that this reminds me of Orwell’s essay “New Words”. When we speak of manipulation we should also keep this in mind: “Everyone who thinks at all has noticed that our language is practically useless for describing anything that goes on inside the brain.” That is an extreme example, but to get back to “everybody belongs to everyone else”, it also has to do with ambiguity.

Language’s ambiguity is a double-edged sword. There is always positive and negative, and no one utopian future. The way out of manipulation lies in our own integrity, together with the ability to understand the other and not hold so tightly to our assumptions.

Speaking of ambiguity, a movie scene is stuck in my mind, reminding me of the impossibility of “codifying” language and freezing it. In the movie (probably Elysium) Matt Damon managed to fool a machine into thinking that he was surrendering. How did he do that? Language, emotions, the invisible (or, maybe, yet uncracked) code we all have is a vortex of meanings that we will hardly be able to embed in a system. Not because we won’t have the code, but because this is unembeddable.

What is embeddable, though, is the framework (not a finite one, yet with enough traditions) of dialogue. It’s only through the multiple, iterative exchange of different views and combinations of truths that any meaning can be conceived, as meaning is always outside, dancing with every participant in a conversation for a while and then changing places unexpectedly. It’s always somewhere within the organics of silence, in between perspectives. (See Social presence as a living by intertextuality for more on this.)

When did your interest in the semantic web come to be? And how did it develop?

From David Amerland’s book: I got curious why he sometimes wrote “semantic web” without capitalization and sometimes capitalized it. 🙂 So I started running after this curiosity “carrot”. 🙂

It developed through conversations with amazing people like Aaron Bradley, Kingsley Idehen, Bernard Vatant, Andrea Volpini, Amit Sheth. They fed my curiosity, thus teaching me, the way a machine learning program is fed with input. 🙂 And then I got this poetic output, seeing everything through the prism of the evolving code (and pleasure) of text and its capacity to interweave people, things and thoughts.

When reading about the semantic web and semantic technologies, knowledge graphs are always mentioned as one of the main elements. Why is it, in your opinion, that when I look at a representation of a knowledge graph, as a terminologist I see a good old-fashioned ontology? Is it just a professional bias on my part?

Because “Natura non saltat”. 🙂 Knowledge graphs are the natural way ahead on our quest for “knowing more” and for codifying this knowledge for future generations. They are based on years of evolution of the ways we manage knowledge. Only that, as semantic technologies are maturing, these ways are becoming more sophisticated.

Let me quote Atanas Kiryakov here (disclaimer: I work for Ontotext + I am his life-long, forever, fan 🙂 even before I started writing for Ontotext’s blog):

“Knowledge graphs are the most advanced knowledge representation paradigm. With over 25 years of experience in AI, I can tell humanity never had an instrument like this.

They combine the best we had with taxonomies (380 BC), semantic networks (1956), network model databases (1971), knowledge bases (1980s), ontologies (early 1990s), semantic dictionaries (late 1990s) and linked data (2000s). And all this at a scale, which reveals new qualities.” [Source: The Knowledge Graph Cookbook]
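To make the idea concrete for readers coming from taxonomy or ontology work: at its simplest, a knowledge graph is a set of subject–predicate–object triples, the data model behind RDF and linked data. The sketch below is purely illustrative (the entities and predicates are made up, and real systems use URIs and dedicated triple stores rather than plain Python tuples):

```python
# A minimal, illustrative knowledge graph: a set of
# (subject, predicate, object) triples. All names are invented
# for this example; real graphs identify things with URIs.

triples = {
    ("Odysseus", "type", "Hero"),
    ("Hero", "subClassOf", "Person"),
    ("Odysseus", "appearsIn", "Odyssey"),
    ("Odyssey", "writtenBy", "Homer"),
}

def objects(subject, predicate):
    """Return every object linked to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("Odysseus", "appearsIn"))  # {'Odyssey'}
```

Notice how the same structure can hold taxonomy-style facts (`subClassOf`) next to arbitrary relations (`writtenBy`), which is part of why a knowledge graph can look like “a good old-fashioned ontology” at first glance.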

Do you think that semantic technologies are going to change the way we write and publish content? And do you think that, in 2020, content still varies according to the communication channel? 

They are already changing it. Look at Wordlift and the way they approach content and content models, or the BBC and the way they tackled TV content in the case of “5 Lessons from Beyond the Polar Bear”.

Regarding the “content varying according to the communication channel” – yes, that is normal. Every channel has its specifics. It is one thing to address a topic in a tweet and another to tackle it in a newsletter. I would also go in the direction of thinking about communication flows rather than channels. See Margherita Corniani, Digital Marketing Communication:

“Digital communication flows are diffused at costs that are getting lower and lower, but it asks specialized and deep competences to communication managers. The ease in the flowing process granted by digital technologies is also the main negative aspect of digital communication. It is impossible to control digital flows in all their effect and contacts and this limit opens the door to competitor actions and to ‘rumors’.”

Which evolutionary path is SEO going to take? Do you think we are or will become “SEO slaves”?

No! Don’t panic. I don’t think we will become “SEO slaves”. SEO is evolving. Take, for example, Wordlift and the work they are doing in bridging content models to semantic web technology and SEO. This is a beautiful way forward. Not only will transactional queries be taken care of, but informational and navigational ones, too. And that with the added benefit of having so much more valuable content on the Web.
I think SEO success, in the context of an increasingly transparent Web, will be a byproduct of clear and concise thinking and of adequate planned and unplanned marketing communications. And, okay, a bit of (or a lot of) data. 🙂 As I learnt from a summary of the book Understanding SEO on Heinz Wittenbrick’s blog: “Success in search is not the result of an isolated activity called SEO, but is based on the work of an entire organization or company.”

You were a philologist in your previous life and you use references in your articles to Classical Culture. I am thinking, for example, of your article titled “The power of URI and Why Odysseus called himself Nobody”.
This made me think of the Netflix series “DEVS”, where the character named Stewart says something like, “You have to know the past to predict the future,” and his boss, Forest, replies that the system’s name should be read the Latin way. How does language (especially dead languages) interfere with these visions?

First, I am still a philologist – a lover of words and concepts. Dead languages do not interfere with these visions; rather, they are the root of all the things unfolding in the domains of language, logic and the world. They can serve us both as a telescope and a microscope to peek into future and past worlds, seeing how words evolve through us coming together, using them and advancing meaning and intelligence through the ages with one of the most sophisticated tools we have for conceiving of the galaxies around us – language and our social instinct.

To wrap up where we started: the brave new text. The word text comes from the Latin verb “texo”, meaning “weave”. And let me remind you (or surprise you) that at the beginning of the analytical engine (the ancestor of computers) was … the Jacquard loom. The Jacquard machine is a device fitted to a power loom that simplifies the process of manufacturing textiles. The loom was controlled by a “chain of cards”: a number of punched cards laced together into a continuous sequence. Jacquard’s invention had a deep influence on Charles Babbage – a mathematician, philosopher, inventor and mechanical engineer who originated the concept of a digital programmable computer: the 4-inch internet galaxy (yes, I am reading Castells) we carry in our pockets today.

