Product Pioneers

Sebastian - History, future of generative AI, focused on ChatGPT and its societal impact

Episode Summary

In this episode of the Product Pioneers Podcast, Sebastian, a philosopher and senior lecturer in the Science, Technology, and Society program at CODE University, delved deep into the world of artificial intelligence (AI), focusing on ChatGPT and generative AI. Sebastian detailed the societal and individual impacts of ChatGPT, particularly its potential in education, referencing his ongoing seminar that encourages students to utilize ChatGPT for essay writing. Sebastian provided an overview of AI's history, highlighting the significant role natural language processing (NLP) has played, since language is fundamental to human interaction. He explained how generative AI, particularly in NLP, brings us closer to machines that can understand and generate human-like language. Discussing AI's cultural significance, Sebastian pointed out that fascination with intelligent machines isn't a new phenomenon and predates the invention of the computer. He spoke of the human propensity to project human-like understanding or consciousness onto AI, and he discussed the risks and ethical challenges that come with this, such as the potential for misuse or the perpetuation of biases. Sebastian touched on how the rise of AI and automation can significantly impact employment, noting that while some jobs may disappear, new jobs that don't yet exist will be created. He emphasized that while AI like ChatGPT can automate tasks, freeing humans for more creative work, constant learning and adaptation will be key to navigating these changes. The podcast explored the legal and ethical implications of AI, with Sebastian advocating for a balance between privacy, innovation, and the equitable distribution of wealth generated by AI. The conversation moved on to the impact of AI on education, discussing the need for a shift in the skills we teach, from rote learning to digital skills, including understanding AI tools and their potential societal impact.
The importance of transparency in AI development was also highlighted. In academia, Sebastian noted how ChatGPT could be used to simplify academic language and draft texts, while in the working world, it could serve as a virtual assistant, manage customer service, or assist creatives. Discussing the future of AI, Sebastian emphasized the role of generative AI as an augmentation tool, rather than a replacement for human effort. He also speculated on more specialized applications of AI and the need for robust ethical guidelines to govern AI usage. Concluding the podcast, Sebastian stressed the importance of focusing not just on what AI can do, but on the human intelligence behind these technological advancements. Rather than merely learning from machines, he encouraged listeners to learn from the wealth of human knowledge, creativity, and diverse experiences.

Episode Notes

The episode of the Product Pioneers Podcast featuring Sebastian, a philosopher and senior lecturer in the Science, Technology, and Society program at CODE University, provides an insightful exploration of artificial intelligence (AI), with particular emphasis on ChatGPT and generative AI. Sebastian delves into their societal and individual impacts, sharing a unique educational experiment involving ChatGPT. He also presents a brief history of AI, touching upon its origin, decline, and resurgence due to advancements in computing power and the internet.

Natural language processing (NLP), he asserts, is integral to AI because language is a cornerstone of human interaction. This leads to a discussion on generative AI, bringing us closer to machines understanding and generating human-like language. The workings of ChatGPT, a large language model trained on extensive amounts of human language data, are also discussed.

Sebastian reflects on the cultural significance of AI and human fascination with intelligent machines. Despite the seemingly humanlike understanding or consciousness of large language models like GPT-4, he stresses that they merely follow programmed patterns and statistical probabilities based on their training data.

Ethical and societal challenges of AI are explored, particularly the risk of misuse and the perpetuation of biases inherent in the training data. He emphasizes the need to remember the limitations of these models, insisting they are tools designed to assist us, not autonomous entities.

The philosopher further explores the effects of AI interactions on our social lives and views on reality. He notes the risk of manipulation of user behavior, pointing to instances of surveillance capitalism and social media algorithms that influence user preferences. He also discusses the implications of integrating ChatGPT with personal devices and software, raising concerns about control loss and potential manipulation.

Sebastian acknowledges the impact of AI on employment, but anticipates the creation of new jobs requiring new skills. He expects generative AI to enhance human creativity and emphasizes the need for continual learning and adaptation.

The legal and ethical implications of AI are not overlooked, and the importance of striking a balance between individual privacy and technological development is highlighted. He advocates for the equitable distribution of wealth generated by technological advancements.

He stresses the need for everyone to understand AI systems and their potential societal impact. He also sees a potential role for generative AI in simplifying academic language and assisting in academic writing and various aspects of working life.

Finally, Sebastian outlines potential future roles for ChatGPT and generative AI, focusing on augmentation, specialization, cohesive interaction with other AI forms, and ethical considerations around biases, misinformation, and manipulation. He notes the ongoing debate about AI in the arts and creative industries, insisting that the source of innovation and originality will remain innately human.

As a key takeaway, Sebastian urges shifting focus from marveling at AI capabilities to appreciating human intelligence, as humans are the architects of these amazing technological advancements. Instead of just learning from machines, he encourages learning from diverse human experiences, knowledge, and creativity.

Episode Transcription

Hello Sebastian, welcome to the Product Pioneers Podcast. It is our pleasure to have you with us today. As we discussed before, GPT-4 was announced as a frontier of the generative AI movement. Today, we will discuss several topics together: the history of AI, how to understand ChatGPT, the real impact of ChatGPT, future changes brought by ChatGPT, and also how to work with ChatGPT to bring positive value to the work and lives we are having.

Would you mind introducing yourself a bit more for us, and also your journey with ChatGPT?

Thank you so much, Huyen, for having me. Really excited to be here as a philosopher on a product podcast. I think that is unusual, and I hope I won't disappoint the audience because of that. And thank you already for sharing the agenda.

Now I'm already committed to saying a lot about this, because it's quite a packed agenda. I'm looking forward to that, and I hope that we will have an exciting conversation. I can say a few words about me, a few words about my interest in ChatGPT, or in generative AI in general, and what I'm currently doing at CODE University.

So my personal background is that I'm a philosopher. I studied philosophy, linguistics, and literature, and did my PhD on a topic within quantified modal logic, which is an application of very complicated formal models that I tried to interpret from a philosophical perspective.

I always had a natural interest in computer science. And so at a certain point I found CODE and CODE found me, and we thought this might be a match. What I'm currently doing at CODE is that I'm a senior lecturer for the philosophy of digital technologies within our STS program, where I mostly work with students on the societal impact of various technologies.

And my goal is to make our students aware that whatever they do in their daily life, and also in their work life, has a certain influence, both on how we live together and on how we understand ourselves as human beings. And yeah, as you already described in your intro, I think ChatGPT has a lot of consequences for how we live together, but also for how we understand ourselves.

Universities also currently discuss a lot how they have to understand themselves, and whether and how we need to rethink education. I think that is something we at CODE are quite good at anyway, given our very specific approach to education. And that is where my natural interest in ChatGPT comes from.

The same goes for artificial intelligence in general; I also published a book on artificial intelligence last year (advertisement section closed). And currently, this semester, I'm doing something a little bit unusual, given the whole debate around how generative AI affects university life and university education.

I offered a seminar at CODE University with the wonderful title "How to Pass Your Module by Using ChatGPT". In other words, I encourage our students to cheat in essay writing by using ChatGPT, and at the same time I want them to reflect on the learning journey.

I want to find out how good those tools actually are, what we could do with them, and also where the limitations of such a tool possibly lie, especially given the huge impact of generative AI on higher education that is currently being debated. I'm hoping that I can contribute a little bit to this debate, also with this podcast, and shed light on various issues with regard to artificial intelligence.

With a special focus on natural language processing and large language models like GPT-3 and GPT-4, on which ChatGPT is based. I think we'll come to that in a minute.

That is wonderful. And yeah, it's such a pleasure for us to be here talking about ChatGPT. It's not the only AI tool out there, but it's basically on the bleeding edge of generative AI, which brings us one step closer to AGI, as we have already discussed in class, right?

However, looking back at the history of AI, could you shed some more light for us on how it has been developed, what tools exist out there, and what the improvements have been so far?

So, the history of AI. As a philosopher, I'm also interested in the history of concepts, and I think it is crucial to understand where this whole concept of artificial intelligence comes from, because that also gives a little bit of direction, and a hint at what we should make of this notion, or of this complicated term.

Artificial intelligence was first mentioned in 1955 by John McCarthy and a few other computer pioneers, who drafted a research proposal for the famous Dartmouth summer workshop that took place in 1956; with their proposal for this workshop, they wanted to acquire some funding.

They first mentioned this concept of artificial intelligence in that proposal, where McCarthy and others wrote: "We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire."

"The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." They obviously talk about human intelligence here. So, in other words, if you follow their definition, it's about the simulation of some aspects of human intelligence, including learning and many other features, and about using machines to simulate those aspects.

So that is where this term comes from. And after that, this term unfolded: a lot of speculation started, a lot of myth, a lot of debate, and it sparked a lot of interest also outside of computer science. It's really hard to summarize. There was serious interest already after the proposal was made, which, by the way, was so early that it more or less goes hand in hand with the invention of the computer. I mean, the computer was invented in the forties or early fifties, depending on where you want to make the cut. And the whole question that is already entailed in this concept of artificial intelligence was discussed even before, like with the famous Turing test proposed by Alan Turing.

That already goes back to a paper written in 1950 by Alan Turing, in which he discussed the question whether computers can think. So you can already see that a lot of those questions that we currently ask ourselves, and that are currently sparked again by ChatGPT, already have a long history in philosophy.

Especially in the 1960s and 1970s, there were a lot of people writing and commenting on it and making assumptions about the human brain. Some people claimed the human brain actually works exactly like a computer, that the human brain is nothing else than a computer, or, vice versa, that a computer is nothing else than an exact model of the human brain. I think we can still learn a lot from these discussions. Later on, both the concept of artificial intelligence and some of the technologies implied by this concept were a little bit forgotten.

There were several periods that, from a historical perspective, people now call AI winters, where people realized that the potential of computers had been a little bit overestimated and that they didn't fulfill the promises and hopes that a lot of people associated with them.

And it only came back, it's hard to say, probably 10 or 15 years ago, when there was something like a new AI summer, if you want to use that term, and AI became a central interest of a lot of people within the field of digital technology.

This mostly has to do with better computing power, and also with the internet and the idea of connecting more and more computers together, and with higher internet speeds, so you could transfer data much quicker than before. But at least from a theoretical perspective, there were not any major changes to what the old concept of AI entailed, even though, as already said, it is a little bit controversial what you even mean by a concept like AI.

But at least the mathematics behind what people call a neural network nowadays, or already called a neural network back then, is already quite old. The concepts had been theoretically proven and demonstrated starting in the 1960s and 1970s, depending on whom you ask and what authors you deem to be relevant. And, now trying to build a bridge, natural language processing was always at the core interest of the AI community, both in computer science and within the philosophical community that tried to explain what's going on within computer science. There are multiple reasons why natural language processing was always so interesting. One major reason, I assume, is that language is at the core of what we do as human beings.

Human beings cannot interact without language in whatever form; you can also think of gesturing or mimicking as a form of language. Human beings somehow cannot live without it, or at least they cannot live together without any sort of language. On the other hand, early computers didn't have anything like graphical user interfaces and all the nice stuff that we have nowadays. If you wanted to use a computer, you needed to know how to use the terminal, and you needed to know how to use a command line within the terminal. That meant you needed to type in language, and all a computer program could do in the end was give you feedback in language on the screen.

So I think that is also why, within the early applications of computers, language and everything related to language plays such a crucial role. If we speak about generative AI, we simply mean artificial intelligence that generates things.

I mean, nowadays there are different possibilities due to various new forms of technology; you could also think of picture-generating or audio-generating AI, and there are a lot of examples of that. But I still think natural language processing, meaning the idea of taking human language, doing calculations based on it with a computer, and letting a computer "understand" (I'm using inverted commas here)

human language, is still one of the core disciplines of artificial intelligence. There seems to be a natural overlap. And recently, with the release of ChatGPT in November 2022, there also seems to be a wider and more general interest in natural language processing tools.

Even though, from a theoretical perspective, you could say ChatGPT is not new. It's just, let's say, another application of what OpenAI, the company behind ChatGPT, had been building for many years before. ChatGPT is just an application of the model called GPT-3, which was released in July 2020, and there were even earlier versions of that. Currently we are at version GPT-4, which was just released last month.

If I may add to that, just from a cultural perspective, I think it's interesting that you could even say those dreams and ideas are way older than the invention of the computer. For the idea that humanity could build machines that somehow outsmart us, I think you just need to go back in human history to the 17th and 18th centuries.

There were a lot of writers and artists playing with this idea. There were people building automatons. There were people like, for example, Leibniz, who thought that the whole world could be explained by computation, and who also invented back then the binary number system on which computation is built nowadays. This idea that humanity creates machines that, at a certain point, will outsmart us, if you want to use that word, is way older, and I don't know what fascinates human beings about that. But that's a topic of its own.

I just wanted to add that it's at least fascinating that you can find a lot of those examples in history, in the arts, et cetera, way before human beings ever thought of building a computer.

That's fascinating. I really feel, Sebastian, like we should have at least one episode to do a deep dive on the hypotheses around that.

One of my hypotheses might be that it's our human curiosity, and also the drive for innovation, which might be the motivation behind all of human civilization's development: always striving for more, striving for perfection, striving to be better. So this might be the start of the dream, but it's just a theory. I really want to know more about the motivation behind it at some point.

Yeah. Talking about ChatGPT as a large language model: I did have a quick conversation with Frank, and actually in the AI community they have more approaches than large language models, like symbolic AI and many other ways, not only a probability model like ChatGPT. However, because you have a class on ChatGPT, would you mind shedding some light for us on what ChatGPT really is and how it was created?

I can try, though I will simplify things a little bit.

You raised a lot of concepts that might be worth talking about, like symbolic AI, which would be the opposite of using a large language model, or of using artificial intelligence in the sense of neural networks based on machine learning.

That is the kind of AI technology we are talking about here. A large language model is, in the end, just an extreme result of what comes out of machine learning, based on what one could call a neural network, or what computer scientists call a neural network.

And it is exactly what the name says: it's a large model of language. In other words, you feed a lot of human language, a lot of text, a huge corpus of written material, into a machine learning program, and use that to build a model of how human language works.

If you look at GPT-3, it had a corpus size, meaning the amount of source material used to train the model, of 499 billion tokens. So you can imagine how much material, how much language, has been accumulated to train this model.
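To make the notion of a token concrete, here is a minimal sketch in Python. It is purely illustrative: real models like GPT-3 use learned subword tokenizers (for example byte-pair encoding) rather than naive whitespace splitting, and the tiny corpus below is invented for the example.

```python
# Naive word-level tokenization, only to illustrate what "counting tokens"
# in a training corpus means. Real LLMs split text into subword units.

def tokenize(text: str) -> list[str]:
    """Split text into crude, lowercased word-level tokens."""
    return text.lower().split()

# A made-up two-sentence "corpus" for illustration.
corpus = [
    "I like ice cream",
    "I like broccoli a little",
]

tokens = [tok for sentence in corpus for tok in tokenize(sentence)]
print(tokens)       # ['i', 'like', 'ice', 'cream', 'i', 'like', 'broccoli', 'a', 'little']
print(len(tokens))  # 9 tokens in this tiny corpus
```

Counting tokens this way is what the "499 billion tokens" figure refers to, just at an enormously larger scale and with a more sophisticated splitting scheme.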

And just to clarify, a word could be a token, right? A word could be a token or even a phrase could be a token.

Exactly. It could be a phrase or a sentence, or a poem, or just a set of words, whatsoever. In the end, it's trained on what humans created somehow, somewhere; it could be a tweet. It's not exactly clear what they used, at least not to me, not spontaneously. But yeah, it's a set of words that human beings used somehow, somewhere.

And based on that (and this is the secret idea behind a large language model), you can determine probabilities for what phrases and what words follow one another. So if you have a sentence starting with "I like", then with a certain probability you can estimate that the next word will be "ice cream" and not "broccoli"; it's a little bit more likely that people like ice cream over broccoli.

You can assume that by looking at how people use language, and how often a certain word comes up after another word has been used before. Then you can do those probability calculations. And given a certain sentence, you can also make a probability calculation about the context of the conversation.

For example, let's stay with the ice cream example: "I want to have a strawberry ice cream." You could guess that this sentence has been said in an ice cream shop rather than in a fancy restaurant or at a bar, even though people might also say the sentence there with a certain probability. And I think that is, in the end, what a large language model does: it calculates probabilities based on what words, phrases, or sentences have been used before, and based on that it assigns a probability to what word will follow next.
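The next-word probability idea described above can be sketched with a toy bigram model: count which word follows which in a small corpus, then turn the counts into relative frequencies. This is a deliberate simplification with an invented corpus; GPT-style models learn these probabilities with neural networks over long contexts of subword tokens, not raw bigram counts.

```python
from collections import Counter, defaultdict

# Toy bigram model: estimate P(next_word | current_word) from raw counts.
# The corpus below is made up for illustration.
corpus = [
    "i like ice cream",
    "i like ice cream",
    "i like ice cream",
    "i like broccoli",
]

# counts[current][next] = how often `next` followed `current` in the corpus.
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1

def next_word_probs(word: str) -> dict[str, float]:
    """Relative frequency of each word observed directly after `word`."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("like"))  # {'ice': 0.75, 'broccoli': 0.25}
```

In this tiny corpus, "ice" follows "like" three times out of four, so the model would predict "ice" with probability 0.75, which is exactly the kind of calculation, vastly scaled up, that Sebastian describes.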

Yeah, very interesting. When we know that it is a probability model, there is a chance that it doesn't contain knowledge inside.

I mean, it might be very good at predicting what to say next, but that might or might not correlate with what it really understands, right? Like consciousness, as we often discuss whether the AI really has consciousness or not. Going back a little bit to the history of OpenAI and their approach of training a language model using the large language model approach:

So yeah, as you said before, this one is GPT-4, so basically they already developed other GPT versions before it. Could you share with us what you know and what you think about the whole revolution?

Okay. I'm not sure whether I would call it a revolution at all. As a revolution I would classify something completely new that turns the whole world upside down. Maybe there is a revolution in the sense of how the public reacted to it, but I don't think it's a revolution in the principal way this technology works.

It's just, let's say, another application, or maybe a better version that is trained with much more training data. In this sense, it's obviously new, but I don't think it's a categorical revolution in the sense that we have a completely new phenomenon that humanity has never seen before.

Thank you very much for that clarification. Going back to the development of OpenAI and ChatGPT, what is your take on that?

In very simple words, I could say that the model of language they use is larger than the models they used before. In other words, they use more training data, and the probability calculations within this language model are just more accurate. What you can do with those tools in the end is more powerful; the text will be more humanlike, and it won't contain as many mistakes. I think it's just a different scale. I like to speak about airplanes and model airplanes all the time when I think philosophically about artificial intelligence. You could think of our language as an airplane, and then you build a model of it and try to make your model as accurate as possible, trying to mimic a lot of the parts of the airplane that you want to model. The newer versions of GPT are just more accurate models of what an actual airplane looks like. Maybe they are bigger, maybe a little bit more powerful, maybe they can fly a little bit higher. But it's still a model of how human language works.

It's interesting. From the way I see it, I feel like they are trying to raise a baby. Basically, in the beginning, the baby studies very fundamental things and tries to grasp a little bit of the main topics, and then it becomes more complicated, or sophisticated, over time.

It's such a nice metaphor when you mention airplanes; it makes it much easier for me to wrap my head around that concept.

Yeah. Although I wouldn't speak of babies, because I think babies are way more fascinating than any large language model could ever be, or any computer can ever be, and everybody who has ever held a baby in their hands knows that.

Yeah, they bring much more joy. And maybe it's also because they have the combination of object detection and many other complicated parts in their brain, while our computers are still quite linear in comparison to how the brain is built, at least to my humble knowledge.

Now, when we talk about large language models and all the data they are fed, can you think of any mistakes or biases, or dangerous use cases, that could possibly come out of that usage?

I mean, what is often underestimated when you think about ChatGPT, or, to use the baby metaphor from our chat before, is this: some people (I don't want to generalize about the whole of humanity) look at those chatbots as if humanity were creating something new.

I usually like to say it's rather like a baby looking in the mirror that doesn't yet recognize that it's actually themselves they are seeing. A large language model is obviously trained with a lot of data from human language, so any large language model contains all the mistakes and blunders, and also, unfortunately, the discriminatory aspects, that human language has.

It also depends on what data it has been trained with. If a chatbot gives you certain answers that are, to take an extreme example, racist or otherwise discriminatory, it is just because that is how human language is, and that is, in the end, how a few humans unfortunately speak.

But I think it's a strange misconception to assume or to say that an AI can be biased or an AI can be racist. It is rather humans who provide the data that a language model is trained with, and discriminatory phrases are something we need to work on.

And we shouldn't fall for the illusion that a language model is much more than a model. Also, looking at the results of a chatbot at its current stage, I don't think this will ever be perfect.

And I still think it also contains a lot of factual mistakes. Given how these networks are trained, the output that is generated will often not be factually correct. There might be a few dangers involved here, especially if people get confused and think that they're chatting with a real person, or start assuming that they are talking to something real that has understanding and cognitive states or consciousness, and then just follow whatever this chatbot says, because they don't understand that they're just talking to a large language model, that they're just talking to abstract mathematics based on electronic circuits.

From that, I think there are a lot of dangers, and there's probably also an incentive to exploit this wrong belief of human beings and to manipulate them. You could obviously also make this chatbot give recommendations for, I don't know, whom to vote for in the next election. Right now, if you ask ChatGPT such a question, it will give you back an answer like "I'm just a large language model, I don't make any political judgments." But obviously there is a certain interest from people in the answer being different, and I'm sure that people would pay whoever releases such a chatbot to give such answers.

So I think there are a lot of dangers involved here, especially if people mistake ChatGPT for something different than what it actually is.

Yeah, thank you so much for bringing that up. I have seen quite a few people kind of worship ChatGPT and treat it a little bit as godlike: ah, it's the only answer, now I don't need to Google anymore, now I don't need to use my brain anymore, it can basically create everything for me. And yes, it is very powerful and it is very smart: it can crawl a lot of information, it never sleeps, and it can process information at a much faster pace.

But I have also had discussions with other people who are founders or creators in the field. For example, Henri said that ChatGPT is more or less a very smart, very efficient assistant, and it only gives you what you ask for. So you are still the one in the driver's seat; you are still the one who has intention and the one who requests information.

And it's so funny, right? I saw my friend posting on Instagram a kind of picture of ChatGPT: when you ask a very controversial question, it always says, "Oh, I'm just a large language model, I don't state my opinion. However, because I want to make you happy" (in brackets) "this is what you could do," which is kind of super funny.

And I always wonder if the people who ask this kind of question follow it in the end or not.

Mm-hmm. Can I, at this point, tell the anecdote, the story of Eliza? My students who listen to this podcast can just skip ahead, because I talk about Eliza all the time, so they can skip until the anecdote is complete. But I think this example from the history of computer science is so telling and revealing about what's going on with the misinterpretation of ChatGPT nowadays.

So Eliza was a chatbot. It was published by Joseph Weizenbaum, one of the computer pioneers back then, in 1966. He was just playing around with certain technologies, and he was building a natural language processing bot based on minimal context. He was looking for an easy way to implement the context in which conversations take place.

And then, as an experiment, he played around with making his chatbot simulate a psychotherapist. The cliché of a psychotherapist is that they just ask random questions, like "What does this have to do with your family?", in response to whatever you say. I'm not saying that any form of psychotherapy, if done seriously, follows this cliché; it was just an experiment with language for Weizenbaum.
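Eliza's mechanism can be sketched in a few lines of Python: match keywords in the user's input against hand-written patterns and reflect them back as questions. The rules below are invented for illustration and are not Weizenbaum's original 1966 script, but the principle, pattern matching with no understanding whatsoever, is the same.

```python
import re

# A minimal Eliza-style chatbot: keyword patterns mapped to canned,
# reflective responses. These rules are made up for illustration.
RULES = [
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
    (re.compile(r"\bi feel (\w+)\b", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bi am (\w+)\b", re.IGNORECASE),
     "How long have you been {0}?"),
]
DEFAULT = "Please go on."

def eliza_reply(user_input: str) -> str:
    """Return the first matching canned response, else a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).lower())
    return DEFAULT

print(eliza_reply("I feel lonely today"))    # Why do you feel lonely?
print(eliza_reply("My mother never calls"))  # Tell me more about your mother.
print(eliza_reply("The weather is nice"))    # Please go on.
```

The bot never understands anything; it only reflects fragments of the input back at the user, which is exactly what made the reactions Weizenbaum observed so striking.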

But at a certain point he was shocked by how people reacted to it. They were willing to project subjectivity onto his chatbot, and they started to tell their inner secrets and desires and share them with what was just a command-line tool asking a few canned questions. A lot of people, even some psychologists, which shocked Weizenbaum most, were assuming that this could substitute for psychotherapy in the near future.

And that was simply because people projected something like subjectivity onto this chatbot. After that, Weizenbaum was so shocked that he thought more philosophically about things and published a wonderful book that I can recommend to everybody, titled Computer Power and Human Reason, which is about how humans give the computer power while actually diminishing themselves and giving up their own reasoning to the computer.

delegating their core human features, core human abilities, and core human skills, as if a computer would ever be capable of such a thing. And I think that is a phenomenon, and that's why I'm telling the whole story, even though obviously Eliza was symbolic AI and, from a contemporary perspective, just a very weak chatbot.

I would say that all our students in their first semester could program a much better chatbot, but then again, this one was released in 1966. I think the story is still the same: people mistake what human subjectivity is, what humans are capable of doing, and what a chatbot is not.

And they fall for the illusion, because they are actually not talking with something that has consciousness or mental states. I think that's an important point to make here, and that's why I like to share the story of Eliza so much. It's very revealing, and I can only encourage everybody to read up on it.
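For readers curious how little machinery was behind Eliza: it worked by simple keyword pattern matching with canned reflections of the user's own words. A minimal sketch in that spirit, assuming illustrative rules of my own rather than Weizenbaum's original DOCTOR script:

```python
import re
import random

# Illustrative ELIZA-style rules: each regex is paired with canned
# "therapist" responses that reflect the user's own words back.
# These rules are hypothetical, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}.", "What does your {0} have to do with this?"]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    """Return a response from the first matching rule, else a fallback."""
    for pattern, responses in RULES:
        match = pattern.search(utterance)
        if match:
            # Substitute the captured words into the canned reply.
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I need a vacation"))
print(respond("My mother is calling"))
```

Even this toy version mirrors the user's own words back, which is exactly the trick that invited people to project understanding onto the program.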

Yeah, it is such a mind-blowing story. I had a talk with one of our co-founders, Manuel, the other day, and he was talking about his concern when he saw his children interacting with ChatGPT. Mm-hmm. It's like they almost lost their sense of time when interacting with it. He said it used to happen when people played computer games, right? Mm-hmm. Or actually not even just gamers, normal users too. We interact, and we have such a closeness, such an intimate relationship, if I might say, with our phones. Mm-hmm. With our digital devices. Some of my friends even claim that they are more like an extension or external part of themselves, an extension of their own avatar already.

And it's fascinating to see that people interacting with the Eliza chatbot back in those early days already had that kind of subjective intimacy. My own feeling is that it might come from a certain loneliness. Mm-hmm. At least that's my own hypothesis: we always want to have someone to share our experiences with.

And not everyone is always available, but the chatbot seems always available, very understanding, at least in the way it is designed right now, and very smart. But I really need to check out the book. It's fascinating.

If you want, I can also recommend you another book and another philosopher. Part of my job is recommending books over and over again. So if you want a more recent book on exactly this topic, on how smartphones, and as the next step also robots and social AI, change our living together and change our expectations of each other, then I would definitely recommend you check out Sherry Turkle's book called Alone Together, which I think already has this aspect you just mentioned, of feeling lonely, in the title.

So in the end, we try to use those devices in order to overcome our loneliness, and we underestimate that those devices, and in the end robots and so on, make us even more lonely, because we do not really interact with real human beings anymore, and our expectations of human beings change too.

For example, she reports on children who trust their phone more than their parents, because they realize their phone knows everything and their parents don't. That shifts their expectations of their parents, and then they're disappointed. I think here we have a tremendous effect on our social relationships.

So I can only second what you say and just wanted to add another book recommendation.

That's wonderful, thank you very much. It's such a pleasure; we're lucky to have a philosopher in the house. I think we already kind of touched upon one of the topics, how to manipulate ChatGPT.

So we kind of touched upon how it affects users, but what is your take on how people can use it in a bad way to impact the way we live and work in general?

There will probably be many ways; I'm probably too optimistic to think all of them through. I already mentioned a few applications. I mean, AI tools are currently already used for attempts at manipulating user behavior. Just think of the whole debate on surveillance capitalism and behavior prediction based on what machines have learned about you, of how the whole online advertisement business works, and of how the algorithms of social media function and suggest what to read. People give them a lot of power to make decisions about what to watch next.

So there's a lot of what you might call manipulation already going on with algorithms, or people fall for that and believe that algorithms always give them the right answers. And I think that generative AI might take us to the next level, as I already said. There probably will be a lot of interest from people in affecting the answers of large language models.

And I assume that people will be willing to pay for that, to manipulate human behavior. Just think of starting a Twitter campaign and telling ChatGPT to create a billion Twitter accounts to spread fake news about, you know, elections or vaccinations or migration or whatsoever.

And think of what could happen with that. So probably the outlook is rather pessimistic. And I hope I don't have to go into details, because I don't want to be pessimistic all the time, and I also don't want to paint in detail what could be done here. Yeah.

Yeah, that's crazy. I mean, the book is fascinating, the book on surveillance capitalism; I cannot recommend it enough to the audience, really check it out. It's another level of social media. But now with ChatGPT, I definitely agree with you, it would bring this to the next level. Right now ChatGPT only gives recommendations; from the way I'm using it, it isn't linked to any other software that we use.

Yes, I mean, ChatGPT plugins are out, but it's still more about the input of information. When it comes to output automation, it could be scary. Mm-hmm. Kind of scary-crazy in terms of how dramatic the impact could be. I have a friend who has this vision, I would say, of connecting ChatGPT with our phone and then to all the information that our phone is collecting about us.

Like, imagine one day Google and Apple in collaboration with ChatGPT, and then, as we authorize it, using that to, for example, book meetings for us, send emails for us, recommend a meal for us, book our online food delivery for us. Mm-hmm. That scares the heck out of me, because it feels like some people are already starting to get the feeling of losing control.

It's like, okay, it's so smart now, I'm gonna let it do everything for me. And yeah, mm-hmm, it could easily be manipulated.

Mm-hmm. And I think what's also a problem here, and it's often underestimated when speaking about technology in general, is that this also has certain effects on a lot of other people.

So it's often presented as: these are tools that you can use, but you can also let them go, and it's your own decision whether you want to make use of them or not. But I usually think this way of thinking is a bit naive and underestimates things like social pressure. If all your friends are using a tool, at a certain point you will have to use it too.

If everybody else is just following the recommendations by ChatGPT, then by opting out you would miss out on the technology, and technology unfolds its own regulatory power if it's not properly regulated. I think that is something to be aware of. The most obvious example, which I also talk a lot about, comes from the gun debate.

In the weapons debate in the United States there is the famous slogan: guns don't kill people, people kill people. This is supposed to justify the usage of guns, because it's not the guns that are the problem but the people who abuse guns and commit crimes or suicide.

This line of argument often underestimates that technology is normative per definition. Just the fact that guns are out there has a certain effect on what I can do, and also on what the people who own a gun can do. But it also puts pressure on everybody else to weaponize.

And then you automatically have to assume that the other person might carry a gun with them, which changes the way you approach people. Here you have, I think, a wonderful example, well, unfortunately not wonderful, quite the opposite of wonderful actually, of how a technology can affect the whole society in a certain way.

And I think it's the same with any other technology, especially if we think about digital technologies, and then especially about artificial intelligence: those technologies unfold their own regulatory power, and they unfold their own normative and ethical problems that you cannot escape.

So it's also not an option to leave our society and live on a small island or in the middle of the forest, saying, I don't want to have anything to do with technology. At least I don't think it's an option for everyone, and I don't think it's an option for anybody who wants to stay an active member of our society or of our communities.

So we are all affected by it, whether we want it or not. The same with algorithms and artificial intelligence: if companies decide to make decisions about whether I get a loan, whether I get a job, or whether I get social security on the basis of algorithms, then I'm affected by it whether I want it or not, and I cannot escape it.

And when they start trusting ChatGPT to write letters to me, I cannot just escape that either. I think that's something to be aware of when speaking about the ethical consequences and normative dimensions of technologies.

Yep, that's true. The thing about creating new, bleeding-edge, state-of-the-art technologies and building digital products around them is what we've already been educated about in STS, right? Usually the people who are creators are smart, very innovative people. Mm-hmm. And usually this kind of person is very curious, always doing their best to push the boundary of technology and trying to create new things: faster, more addictive, more seamless, whatever it may be for the user or the use case.

But they have also studied very well how to utilize all the social and emotional aspects. For digital social media, we already have dark patterns, we already have social pressure; they already utilize, I would say, pitfalls in our psychology to basically wire or hook us into that kind of cycle.

So now with AI, and with AI able to get into its own iteration loop through the combination of human creators and AI-automated creation, I cannot imagine where it could go. Talking about that aspect of ethics, AI ethics and AI safety, let's also talk a little bit about GDPR and the privacy topic.

So I heard that Italy has already banned ChatGPT and Germany is considering it. I'm not sure how up to date you are with the topic. Mm-hmm. But what is your take, your opinion, on the regulation?

Mm-hmm. Thanks, it's a good question. I'm aware that Italy has banned it, or is considering banning it; I think they put a 28-day ban on it.

And I think this situation might change, and there will be a lot of governments trying to react to it. You raise the issue of the GDPR, which I think was also the reason for the Italian government banning it. This has, how to phrase it, not much to do with how the chatbot works; rather, it's just a legal dimension of how the data being put into ChatGPT is used. And obviously there are certain risks here with regard to the GDPR, which is the General Data Protection Regulation by the European Union. It regulates the usage of personal data by all companies that deal with personal data, usually gives the user the right to control what happens with their data, and tries to make sure that data stays within the European Union and is not shared with any external service. Now, the servers of ChatGPT, or of OpenAI, are in the US, most of them, and whatever you do with ChatGPT, you grant OpenAI the right to analyze the data. So all the prompts you give to ChatGPT could be analyzed, and in the end it's out of your control

what they do with them in order to train either this language model or other language models in the future. This also means that if you, for example, enter customer data into ChatGPT because you want an analysis of it, or if you enter a text by someone else into ChatGPT, this text will be uploaded to American servers, and you may not have the rights to do that. So there are a lot of legal questions created around that, and I think that is what governments are currently trying to regulate. I'm not sure whether successfully; I would say hopefully, because it will also limit the usage and, to a certain extent, the potential of things like ChatGPT.

But that is the legal background against which we have to discuss this. And as with any new technology, and I already talked about why I don't think it's completely new, but still, some of the consequences are rather new, one has to wait and see how courts will react to it.

And what's the exact legal situation of the usage of ChatGPT, especially with regard to the GDPR, but also with regard to other regulations in other countries. So those are my few cents on that. It's something I'm currently quite curious to observe.

Yeah, I'm also curious how things are going to unfold. I truly hope that cases like Cambridge Analytica and Facebook, and similar cases that happened to social media, won't happen to ChatGPT, because I would be very sad to see us having to hold back on what feels like natural human evolution, which is creating something smarter than us.

Yeah, so I think we have touched on so many points already, the emotional and social effects. Regarding the economic impact, like the future of the job market due to ChatGPT, what is your thinking on that?

Oh, I think that's an excellent question.

Probably that's a question for its own podcast episode, but I can try to comment on it quickly. It's a very big question, and a question that often comes up when a new technology becomes popular. Obviously, a lot of jobs have vanished over the past and a lot of new jobs have been created, and we are still not all unemployed.

Quite the opposite: currently there's a lot of demand for people. That would be my first observation. Obviously, ChatGPT will have a lot of impact on the job market, and it will change a few jobs. Everywhere you need to write standardized texts, you can be sure that those texts will be more and more written by a generative AI.

To a certain extent, journalists will be affected by it, and marketing specialists writing social media articles, et cetera. And then you could probably also raise the point, with regard to the quality of those generative AIs: what does it tell us about the current state of social media marketing if those texts can be written by a computer?

So there will be a few changes, also in how people understand their work. By the way, we didn't touch upon this yet: programming, because obviously you can also use generative AI for programming. That's currently a huge use case for ChatGPT. You can use it to find bugs in your software code, or use it to program at least parts of your software, and I think there's some potential here. But it also means that the job description of a software engineer will change: you don't have to write all the code yourself anymore, but rather write prompts to a chatbot that will write the code for you.

So that might be something that could happen. It might also be that a lot of us will be affected in the sense that we will lose our jobs, because fewer humans may be needed in some fields. This might be a tragic consequence, but I think it is more important to think about what societal and economic change is necessary in order to compensate for that. How do you make sure that you still guarantee social security and a certain social welfare in a society where no work has to be done by humans anymore? Because actually, that is quite the ideal, quite the dream. And that's not only the dream of capitalists; communists and socialists also dream of a world in which humans don't need to do any work anymore, technology takes over, and humans can lean back and unfold their true potential through self-development, by becoming artists or whatsoever.

So I think that's rather a question for governments: how to react if the job market is affected, and how to make sure that the enormous profits that could be made here are distributed fairly and equally. Because the idea of working less, or working on the things you really care about, and delegating the stupid, repetitive, administrative parts of your job to an AI,

seems tempting at first. So I rather think we should speak about the social consequences of it. I wouldn't mind a chatbot replying to a few of my emails so I have more time to invest into actually educating students, and I think a lot of people would think that way.

But I hope you get my point. It's rather about governments finding good compensation and making sure that all of us can have a decent life, and about how to do that: concepts like a universal basic income, machine taxes, proper taxes. I mean, technically you could just apply tax regulations to the big American companies who make a lot of profits in Europe but do not pay any taxes.

But I don't want to become too political in this podcast. So those are things that would be necessary if you think about the future of the job market. Now, obviously a few new jobs will be generated, which might also be fine; obviously not every one of us can become an AI engineer.

But on the other hand, maybe more positively, there are always niches in which humans are needed. In the new book that I wrote together with colleagues, which is going to be released soon, we wrote that nobody needs hand-baked bread or hand-made pottery anymore, but people still buy it, because they appreciate the art of handcrafting such things.

They also appreciate the taste, and it's often still better than industrialized bread, or industrialized pottery or ceramics. Even though you obviously wouldn't need to buy it, people still buy it, and they pay a lot of money for it. So I don't think the future of the job market is as hopeless as it's often presented.

Yeah, it's such an interesting topic. And actually, quite funny: I think it would be very funny if in the end you have an AI chatbot grading student assignments that were also already written by an AI. So it's like we don't really need to work anymore; we mostly think and direct or monitor the machines.

Then the problem is, and this is probably also how it will affect our educational system, the point of the exercise. That's part of it too: why do you write an essay or term paper in your studies, and what's the point of creating it?

If I know that the essay was probably written by a generative AI, and if the student knows that my grading was done by a generative AI, why are we playing this game? Then we could just as well focus on something else, and in the end we wouldn't need a generative AI anymore either.

So I think it's in a way just stupid to assume that this is how our universities should work in the future, and I definitely don't want to delegate my feedback on term papers to a generative AI. But I also hope my students don't write their essays just for the sake of getting the assessment, because I don't think that's the exercise.

I usually tell my students they don't write that stuff for me; they write it for themselves, because it's an opportunity for them to learn, to reflect, and to understand how certain things work. And coming back to this idea of how important language is to a human being: language is also a way to structure your thoughts, and it's often the only way we have to structure our thoughts.

That, in the end, is also why students should write essays in an ideal academic world: because it's a way of structuring thoughts. Obviously, that's not how university education works in general, and there's a lot of pressure. Hopefully we are better at CODE, but there's a lot of pressure on students to succeed as early as possible, to finish as many modules as possible, to get lost in all the administrative, regulatory stuff, and to do a lot of exams.

So obviously there's a certain temptation to cheat by using generative AI here and not do all the thinking and reflection work. But then I would say, though sometimes I'm a bit naive, sometimes even romantic: we should rather change our education system in a way that lets our students actually reflect on things and learn what they are actually curious about.

Because if you're curious about something, then you wouldn't want an AI to do it for you; you would rather do it yourself, because you want to do the reflection yourself, and you want human feedback on it, because you want to improve and have better thoughts, et cetera.

So in the end, if someone uses a chatbot or generative AI simply to delegate a task like writing an essay, then probably something went wrong way before that, and they should have contacted the lecturer about why they should even write an essay they're not interested in, and looked for alternatives.

So that would be my view on the idea of students writing essays with generative AI and lecturers using generative AI for grading: we don't have to cheat each other here and pretend that somebody wrote that essay and somebody has read that essay. That's a game nobody benefits from at all. Let's rather think of how we could change that game and design learning environments where everybody profits.

Yep, yep, I absolutely agree. I mean, we have touched upon so many things, so let me break things down into different components or subtopics. First of all, I've had some discussions with friends of mine who are founders in generative AI, right? And the way we think about how ChatGPT is changing the job market is like a new industrial revolution. It's like a data revolution, where we produce much more output with fewer humans in the loop. There's a debate where some people think that humans will be completely out of the loop, but at least from what I see in history, it seems that humans will in some way stay in the loop, because humans direct the AI.

And of course, AI at this point won't simply become self-improving on its own; that's part of how machine learning works as a whole. However, AI is still in the middle of the consciousness debate, so as long as we don't have any good approach to providing consciousness to AI, it feels like AI still has no intention.

Mm-hmm. And when there's no intention, it seems like humans should be there to give it the intention. It can be a very good assistant, but it would never really be the boss, the one in the driver's seat. And with that, I was always wondering how our whole education system and our skill sets will change. For Gen Z like us, digital skills in general are already required.

So basically, social media skill is something you need to have. You can have low social media skill, which means you don't have a lot of social media presence, but it's really hard to find someone among millennials or Gen Z who has no social media skill at all.

I don't have any social media, and I also do not have any social media skills at all. I'm still surviving. So if you want to connect with me, write me an email, but no social media. I mean, you could call email social media too, and probably you could also call a meeting in a pub social media. So if you're up for that, always reach out, or rather let's meet at a coffee place or whatsoever.


That's so cool. But you have Zoom skills, and you also have Slack skills. And email belongs to the internet boom, so basically it does belong to digital skills, if we can put it that way, right?

I'd like to say of myself that I'm aware of how those tools, websites, and offerings work.

So I also know what Instagram is and what TikTok is; it's not that I've never been on those websites. But I also don't think that you need a specific skill in order to use those tools. Personally, I think that's a little bit overstated. I mean, you need a certain skill in order to market yourself on social media and to become successful there.

But yeah, let's not talk about social media here. Otherwise I would be ranting too much.

Yes, that's true, that's true. I mean, those are different levels: you can be a user, then you can be a creator, and then you can even be a coder trying to build the next social media platform with more advanced features, which is a really different level in the playing field of digital products.

Yeah. But talking back about it, what I meant here is AI skill, if I may call it that, and the question of how places like CODE, you know, a very innovative educational institution trying to educate the digital pioneers of the future,

Mm-hmm. Could play a role in the whole ecosystem. Because the thing is, going back to the essay: everyone can write an essay. Mm-hmm. It's about the different levels of how sophisticatedly they can use language to express their thoughts and their perspective on a certain topic, right?

Mm-hmm. Using AI, it could be much easier or much faster, but the topic is still chosen by the student. Mm-hmm. And the way they design the structure while talking to the AI also comes from the student. There was one very interesting topic I talked about with Henry regarding prompt engineering.

Mm-hmm. It's like: how can you get the best output? And it's actually very hard to say "the best" here, because it's very hard to put outputs into a benchmark or comparison. But the better the input you put into the prompt, and the better the input OpenAI has already put in for us as a kind of pre-design, meaning the whole training data set and the whole approach to the training data, the better the output is going to be. So in general, I feel like we should have some kind of AI knowledge, AI skill, which is an understanding of the AI tools we are using. What type of research are they built on?

What type of approaches did they use, what type of architecture, and which data sets do they have? Mm-hmm. There's also a debate surrounding that. I listened to a podcast where Lex Fridman talked with Sam Altman from OpenAI about open-sourcing ChatGPT or not, so that people can really see the data set and the engineering, the architecture, behind it.

And the AI skills here, what I feel we should have, are not only that kind of language to understand it, but also the skill set at least to do prompting properly, prompt engineering, and maybe, at a higher level, to become a creator: to really write some code to contribute to the code base, to help ensure the AI is less biased, because the people who feed in the data can be skewed towards a certain group of people in society, and all these kinds of things.
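The point made above, that richer input tends to yield better output, can be sketched in code. The helper below is hypothetical (the function and field names are my own, not any OpenAI API); it only illustrates how role, constraints, and an example can be assembled into a more structured prompt than a bare task description:

```python
from typing import List, Optional

def build_prompt(task: str,
                 role: Optional[str] = None,
                 constraints: Optional[List[str]] = None,
                 example: Optional[str] = None) -> str:
    """Assemble a structured prompt from optional context pieces.

    Each piece the caller supplies (a role for the model, explicit
    constraints, an example of desired output) is added as its own
    section; the bare task always comes last.
    """
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if example:
        parts.append(f"Example of the desired output:\n{example}")
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

# A bare prompt versus one enriched with role and constraints.
bare = build_prompt("Summarize the essay.")
rich = build_prompt(
    "Summarize the essay.",
    role="an experienced writing tutor",
    constraints=["at most 3 sentences", "mention the thesis explicitly"],
)
print(rich)
```

The design point is simply that the structured version gives the model more to condition on; whether that actually improves a given model's answer is an empirical question, which is why benchmarking prompts is hard, as noted above.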

So yeah, it's very interesting when we think about the future of education in the world of AI.

Mm-hmm. There was so, so much in that. I think I agree with most of what you have said. What is crucial for us as an institution of higher education is that our students know what they're doing, they know what artificial intelligence is, and they at least know in theory how it functions.

And I think that's something everybody should at least be aware of in theory, even people who don't want to be digital pioneers. But obviously, somebody who wants to work in that field should probably be aware of what's going on and of how such things work.

And obviously, it's not realistic that we can educate our whole society, everybody within our society, so that everybody fully grasps exactly what machine learning is and how it works. For that, we also need to find ways to compensate and make sure that people are not left behind, and we should design our social systems, our social security systems, in a way that those tools cannot be used to manipulate people and can still be beneficial to everybody,

even to those who don't exactly know how those tools work. Sorry, there was also so much in my input, in my sentence, here, but I think that is where governments are needed. For our students, I think it's still important to understand how it works.

Because they're working in a field in which artificial intelligence and AI applications will play a much bigger role in the future. Obviously, you also need to know how to engineer prompts for generative AI and how to use it as a tool. But I don't think that we should stop at how to use tools, because the tools, especially in software engineering, change over time and time again.

Also the programming languages that are used. And, you know, I'm still fond of the fact that Pascal was my first programming language, and nobody's using that anymore, obviously. But I also think the basics of how programming works haven't changed, no matter what programming language you use.

So, to exaggerate a little bit: if you know one of them, you know all of them, at least in the sense of knowing in general how they work. But what our students also need to understand are some basic skills of how to work in a team and how to work together, and that software engineering, or designing digital products, is not a one-man or one-woman show anymore; there are often large teams involved.

So you need to know how to work within those teams. You also need to get a sense of what the current developments are and how to adjust to them. And I think that is something that hopefully CODE provides, in a way that is very applied and lets our students strive for their own interests, follow their curiosity, build projects around topics they're interested in, and learn the technical skills while doing that.

But at the same time, we also offer an education that makes them aware of how those technologies work and that, in a way, as I like to describe it, demystifies the magic around artificial intelligence, generative AI, and large language models.

Yep. Yeah. Right. I mean, the whole concept of trying to learn a thing by building it from scratch.

Exactly. Yeah, it's so true. It also applies so well for me, because the fundamental thing about first-principles thinking when we learn something is actually the same: it's broken down into steps, broken down into a workflow, and a better tool is just like a faster horse.

And actually, I think it somehow also becomes a little bit trickier, because having an AI to me feels like having a sports car: if the person doesn't know how to drive, it doesn't really matter how fast a car they own. So it's very interesting. It's a very simple use case, but when I started the podcast, I also didn't know how to do audio editing, and I was struggling with that.

I needed to ask my friends a lot and also try out AI tools. The AI tools are actually super cool; they speed up the audio editing, by how much I don't know, but it's just rough automation. In the end, however, I still need to go back to the very fundamental, popular tool in the industry, Audacity, because it's the same concept applied in a different way, like visually.

For example, we talked with Draft in the first episode about how they actually do automation in a faster way, using visual learning and voice control. And it's the same with another AI tool I'm using, Descript, which does visual, text-based audio editing using AI, and it makes audio editing somehow faster.

But it's the same concept: cut, copy, paste, moving things around, drag and drop, which is very interesting to learn. And yeah, I can really relate to what you said about the feeling of happiness when we really learn something by doing it from scratch and understand it thoroughly.

Thanks. I think you also said something else that is very important and that we didn't really touch on in our conversation yet, but I should mention it, especially towards the end of this conversation: in my role, a philosopher, or philosophy in general, is often cast as being negative and only speaking about, so to say, the dangers related to certain concepts. And I think you said something beautiful when you said there's also a lot of joy in playing around with those tools, and I think that is what it is all about.

Regarding digital technologies, it's a joy to play around with them, and I'm fascinated by them. And I think that's also something that should be mentioned in this podcast: it's not only about talking about negative aspects and the dangers technology brings to our society.

But it's also not about ignoring those dangers; still, we need to find a way to deal with them, and we need to find ways to live with this technology now that it's out there. And one could think of so many cool things one could do with it. I mean, just think of artificial intelligence in general, or machine learning methods.

They could help in medical diagnosis, or be used to build warning systems for natural catastrophes like earthquakes and tsunamis, which could save a lot of lives. So I think there's a lot of potential here, a lot of potential uses of artificial intelligence, and hopefully also of text generators like ChatGPT.

And I don't want to condemn all of them by only speaking about the dangers of some possible applications. What is most important, at least for me personally: technology is also just fun. It's a joy, and it's fascinating to work with computers, and sometimes I think that something like programming is just a way to structure the world and to structure my own thinking.

And in a way, maybe also, and I think here I'm really inspired by a lot of our students, to create new worlds, to think of how to create new worlds, or how to make our world better. We also talked about possible ways ChatGPT could be used to manipulate. But I'm also quite sure and quite confident that a lot of people, and hopefully also a lot of our students, will come up with ideas for how to use generative AI in a positive way, because I also think that a lot of startups have ideas that could really affect the world positively. And I hope some of them will be successful, and we will hear from some of them in the midterm or the long term, or maybe even in the short term.

Yeah, I mean, you are right.

I mean, the model of using AI as an assistant, at the end of the day a tool, right, a very smart tool, to bring humanity forward and make a better life for everyone is a very important point. And what you said about the fun, that's actually an important part of the process, and it's very interesting when I also get to have the joy, the fun, of creation.

I would really hope for a future of accessibility. We already have a lot of problems with accessibility for different groups of users when we try to create digital products: certain groups in the world don't have internet, or very fast internet, or cannot access a certain digital product; or, talking about user interfaces and user experience, some products just exclude certain groups of people, right? That is already a problem in itself, and I hope we have all learned from it; it has probably become widely known knowledge by now. And hopefully it won't happen in the AI case, so that everyone can have the chance to become a creator, to enjoy the feeling of learning something new, of immersing themselves in a new tool.

If it's quite sophisticated, quite new, state-of-the-art technology, it might be a little bit scary in the beginning, but in the end, when you have a sense of control, it's actually very rewarding. Yeah. So we have come a very long way. We have examined a lot. We have talked about the history of ChatGPT.

We have also talked a lot about the impact of ChatGPT, and we have explored a little bit the future changes that could come up. We also mentioned how to work with ChatGPT, but maybe we can go a little deeper and get specific: what do you think could be the use cases of ChatGPT in academic research, in studying, and in working life?

I think that's exactly why I'm offering this seminar right now: because I want to experiment with that, and I want to see, so to say, how one could use it and how far you could get with it. I don't think that you could use generative AI for conducting academic research properly.

And I also don't think that you could come up with something new or valuable, something very original, with ChatGPT or with any sort of chatbot, because, as I said, it's just a mirror of what's already out there, since that is how it's trained. So I would be surprised if there were any defining findings coming out of it, even though some of them might be creative in the end.

But one could use it in many ways, also as a practical tool to make your own studying and research a little bit easier. Right now, if you want to read up on a topic for the first time, most people, both lecturers and students, go to Wikipedia first.

And I think you could just as well ask a generative AI, and maybe you would get an even better reply from it.

A better Google search.

Yes, it's just a better version of Google search. And maybe, to a certain extent, Google feels threatened by it and will probably come up with their own large language model quite soon.

I mean, they have also released something, but Google has been surprisingly passive in the whole debate so far. But yeah, in this sense, obviously ChatGPT could help a lot, and I don't think there's anything wrong with that, or with students using it.

The same goes for other academics, also in terms of style correction. I mean, there are a lot of tools for correcting your grammar, correcting your spelling, et cetera. Spelling used to be quite a big thing when I went to elementary school, and later to high school; nowadays, you don't need to know how to spell anymore.

Because you have tools that automatically correct your spelling, and tools that correct your grammar and make your text read as better English, which helps a lot for non-native speakers, including me. So, in a way, I would say one would be stupid not to make use of those tools. And one could also think of it in a more positive way.

One could also use those tools to improve your own writing skills and your own language skills by training your language together with the chatbot. I don't think there's anything wrong with that; quite the opposite, there's huge potential there in terms of how academic essays are written. Which is also something, and I shouldn't be ranting too much in public.

But there's often something wrong with the publishing industry within academia. I think for some people, and I'm lucky to have a job that doesn't involve this, publishing is like factory work: you just need to publish ten papers a year, otherwise your career is in danger and you won't find another job, et cetera.

So obviously there's a lot of temptation for academics to write the same stuff over and over again. This leads to standardization, and it just leads to bad research. And I think that's exactly the kind of research that you could, in a way, delegate to generative AI.

But let's also say, to speak quite bluntly, that's also the kind of research that nobody really needs. So you could just as well do something else, because it's also the kind of research that nobody actually wants to read. It's similar to the example we had before: if students write their essays with ChatGPT and I give feedback with ChatGPT, why is anybody involved in that game at all?

So why would anybody produce standardized research at all? We could just as well, as a society, live without it and focus on more important things. So, what else do I have to say here? Obviously, it can serve as a tool to improve certain aspects of research, maybe also as a tool to do a first literature search, to get inspired about possible arguments, or to get an overview of a debate.

I think it might be a useful tool, like Wikipedia or Google are nowadays. But as a proper substitute for really cutting-edge research, I don't think there's much hope for generative AI. And I also think it's good that way, that this kind of research is done by human beings and not by computers that don't understand what they're doing in the end.

Yeah, it's very interesting what you bring to the table.

Actually, by accident, we didn't intentionally do it, but in the end we ran a kind of similar testing experiment with generative AI. Basically, last week some friends and I went to the START Hackathon in Switzerland. It's quite competitive; it's one of the biggest hackathons for students here in Europe.

And we chose the sustainability challenge with Google. As I am very passionate about AI, I kind of want to use ChatGPT for everything, so I ran through ChatGPT all of the potential sustainability ideas, all of the sustainability ideas it could come up with. I also told it to generate potential datasets, data like APIs, open-source APIs, or whatever it could give me to create a nice project within two days.

My friends went a completely different route. They have been embedded in the sustainability field for a longer time, so basically they have a much more intuitive feeling and understanding, knowledge both conscious and subconscious, when it comes to the topic. And they tried to come up with the ideas and search for the datasets by themselves.

And then it went even further, to another point, when we finalized the idea and built a product on it. One friend tried to build the frontend using Copilot, working with it on the side, because he's at the beginning of his journey working with React. And it made him much faster.

Meanwhile, another friend had already been working on backends with the Rust programming language for a long time, and he just felt like: oh no, I don't need ChatGPT, I already have a feel for the programming language and I can do it much faster on my own. And it was fascinating to see this kind of A/B testing experiment running in parallel.

And I would say, at the end of the day, it was actually very interesting to see such a close call, or a tie, if this were a competition or a match, because they came up with the same idea that I had actually told ChatGPT to write for me. And when it came to the datasets, ChatGPT was a little bit faster in generating the dataset list, apparently because once it knows what it wants, once it has a very specific goal assigned to it, it can crawl the internet, or crawl resources, much faster than we do as humans. But the idea part, the creative part of creating a project, a digital product, feels similar to when people do academic research: the research question, the hypothesis, the curiosity. That is very hard to actually replace with AI.

Which, yeah, blew my mind. It has been a very interesting experiment for us.

Yeah, and talking about that, it's also interesting when we talk about creativity, right? We had a small discussion about it: can AI be creative? Can AI come up with revolutionary ideas, not only in academic research but also in other fields?

Okay, there's a huge debate around this with regards to the arts: whether AI could generate music or paintings, and all sorts of written text, like poetry. I mean, there are a lot of things to say about it. In the end, it depends on what you mean by creative, but a computer, in my opinion, per definition cannot come up with something original, something completely new. In other words, everything that a computer does is predetermined; everything is already determined by how electronic circuits work and how they react to each other. The end result might be surprising for the end user, similar to the monkey typing at a typewriter example: it might be surprising what a monkey could come up with.

But it's hard to say that this was creative in the sense of coming up with something new. This is, by the way, a debate that is way older than the invention of the computer. There's the famous argument brought up by Ada Lovelace, a mathematician living in the 19th century; you could say she was the first woman in computer science, or even the first person in computer science, because she wrote the first computer program, based on the Analytical Engine sketched by Charles Babbage.

She famously claimed that anything such a machine could come up with cannot be surprising. That was an objection made popular later on by Alan Turing and discussed intensively. So this debate is way older, and this idea that humans want to outsource their creativity to machines is also way older.
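The determinism Sebastian and Lovelace describe can be made concrete with a tiny sketch (the function name and parameters are illustrative, not from the conversation): a seeded pseudorandom "monkey at a typewriter" produces text that looks surprising, yet the same seed always yields exactly the same text.

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "

def monkey_types(seed, length=40):
    """A 'monkey at a typewriter': pseudorandom letters whose entire
    output is fully determined by the seed of the generator."""
    rng = random.Random(seed)
    return "".join(rng.choice(ALPHABET) for _ in range(length))

first = monkey_types(42)
second = monkey_types(42)

# The text may look surprising to a reader, but it is predetermined:
# rerunning with the same seed reproduces it character for character.
print(first)
print(first == second)  # prints True
```

This is Lovelace's objection in miniature: the machine can only do what its configuration, here the seed and the generator, already determines, however surprising the output appears.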

And you find a lot of experiments of academics, and also artists, playing around with automatons and falling for illusions, as I explained at the beginning of our conversation. Yeah, and as for the idea that something other than humans could be creative, I'm not sure why humans want to have it that way.

I'm not sure where this dream, or this illusion, could come from, but I personally don't think it will be possible. Obviously, though, you could produce art with generative AI, or produce art with machine learning, or with any sort of artificial intelligence.

You could use the computer as a tool for producing art. But in the end, there's somebody who needs to classify it as art, and so it's a decision by the artist who puts into the museum whatever generative AI is "creating", to use that loaded word.

So one could argue that the art here, or the aesthetic creativity, lies in humans deciding that this is a piece of creativity; but in the end, I don't think that computers could come up with it themselves.

I see. Do you think it has any relationship with consciousness, or subconsciousness?

Yeah, intention, emotion. And also, as humans, we always have the tendency, I mean, creative humans have the tendency, to be quite diverse. For example, they can be both a physicist and a musician at the same time. They try completely different things, mix and match different things together, push the boundaries, are adventurous enough to try something they have never done. And then they can make the leapfrog of creation or innovation.

I think the answer is clear: yes. I think that is what creativity is, in the end, all about.

And I think that is also what makes humans human. It's what makes humans interesting, and what makes art produced by humans interesting and inspiring. And, without going into too much detail, because that's a whole debate in itself, I don't think that computers will ever be capable of having something like emotions or consciousness, or anything else of real meaning with regard to what it means to be human.

Thank you. I am very curious and looking forward to seeing how people try to develop AI in the future, now that there's not only one approach to developing a large language model, right? They can use symbolic AI and try to build a knowledge base; we can combine object detection using other sensors with neural nets; and we also have Neuralink, which means we're trying to import our memory, our thought processes, into the AI. So I would say it's just fascinating for me to see how things are going to unfold in the future, and in that debate.

Mm. Yes. Getting to the last part of our discussion today. We have already visited the past and also the present of ChatGPT; we really talked in depth about ChatGPT and its influence and impact on the world right now. How about the future of ChatGPT? What is your vision regarding the future of generative AI in general, and ChatGPT in particular?

Oh, that's a good question. I think speculating about the future is something that philosophy usually isn't good at, and also something philosophy should be quite careful about doing. Unfortunately, there are some philosophers who still have really big visions and big speculations, but I don't think it's advisable to speculate too much about what will happen, because humanity has been mistaken about its speculations in the past, a lot.

And I think the field is full of people who are making massive speculations about what will happen. You mentioned a few people and a few ideas, like the whole idea of Neuralink, or the idea of transforming humanity into some sort of cyborgs, or into transhumans or posthumans, et cetera.

So I think there are too many speculations, and wild speculations, out there, and I would be rather hesitant to contribute to them; I would rather look at the near future. And I think the near future is full of challenges, also with regards to artificial intelligence and the social changes that occur due to it. I would rather suggest that we focus on the social problems it currently creates, instead of making wild speculations about what will happen in 100 or 1,000 years, or maybe in 2045, when we could expect the singularity, as Ray Kurzweil famously claimed. Let's look at what happens now, look at the problems we currently have, and not fall for wild speculations, and also not fall for the illusion that AI could ever substitute anything that is at the core of humanity.

Yeah, it's so interesting; I cannot agree more. I feel like everything is created by humans and for humans, and everything starts with a dream. But only with the collaboration of everyone, from researchers to technologists, to engineers, to users, to marketers, can we actually create the dream and make it come true.

So, for a last wrap-up: thank you very much for the very in-depth discussion. We couldn't have a greater honor than to have someone like you, a lecturer, a philosopher, and someone deeply involved with technology, on the podcast. If there's only one thing you could tell our audience as a takeaway from this episode, what would you like to tell us about AI?

Thank you very much, Huyen, for having me and for the opportunity to have this conversation. And also thank you to everybody who listens to this podcast, and especially to all my students, who inspire me and challenge me to think about these topics, and from whom I profit and learn a lot. So, you asked me for one piece of advice, and maybe it's just a change of perspective.

A lot of people are astonished by what artificial intelligence is capable of doing. But instead of asking yourself how magical artificial intelligence is, rather ask yourself how magical a human being is, and how much you could learn from humans instead of from computers.

Thank you very much. It's a very nice takeaway.