Prompt Engineering for Relationship Dynamics: Designing Conversational AI for Conflict Resolution
Table of contents
- AI can simulate therapy, but it lacks the strategic adaptability of a real therapist. To truly support conflict resolution, we need a multi-level AI that combines interaction with ongoing analysis and adjustment.
- Creating a multi-level AI system can revolutionize therapy by combining human-like interaction with strategic analysis, making mental health support more accessible and effective for everyone.
- The future of therapy isn't about choosing between human and AI; it's about making mental health support accessible to everyone, everywhere.
- Mental health innovation is crucial, but the human touch in therapy remains irreplaceable. Technology can enhance support, but it’s the connection that truly heals.
- AI won't replace therapists; it needs human insight to truly connect and understand.
- Embrace technology, but never forget the power of human connection; it's the bridge we need to truly thrive in a digital future.
- AI can be a safe space for self-expression, offering support when humans can't be there, but will people embrace it if they know it's not human?
- Trust in AI for therapy hinges on data privacy and the human touch; without it, the conversation may never truly begin.
- The therapy landscape is evolving, with Millennials and Gen Z embracing AI-driven solutions, but concerns about data privacy and confidentiality remain paramount.
- AI is revolutionizing access to knowledge, but we must ensure it's personalized and context-aware to truly benefit individuals.
- In a world where data is currency, safeguarding our privacy is not just a choice—it's a necessity.
- AI can mimic empathy, but true understanding is a human trait we must protect and enhance.
AI can simulate therapy, but it lacks the strategic adaptability of a real therapist. To truly support conflict resolution, we need a multi-level AI that combines interaction with ongoing analysis and adjustment.
Today, we will have an online discussion focused on prompt engineering for relationship dynamics, specifically on designing effective conversational AI for conflict resolution. This is part of the MBTB online talks, and we welcome you to join us. Today's session features speakers from four different areas: the tech side, the business side, health science, and wellness. Each of these areas will contribute to our discussion of the topic.
I would like to introduce our speakers for today. First, let's welcome Yor Chuno, who is a narrative designer and creative leadership professional. Next is Rafael Braga, a business development expert, AI specialist, and entrepreneur. We also have Dr. Mauna Yusu Kadiri, a neuropsychiatrist, psychotherapist, and digital health expert. Finally, we have Olga B, a trainer of nonviolent communication and psychologist. This is our team for today’s talk.
We are ready to start, so grab your coffee or cup of tea! Whether you are with us live on the broadcast or listening to this recording on YouTube or LinkedIn later, we appreciate your presence.
Now, I will hand over the floor to Yor Chuno to open the topic. Please hold on while I share my screen.
Hello everyone, I am Yor Chuno. My journey has taken me from directing film and advertising to exploring the intersection of AI and psychology. Today, I will be talking about AI in relationship conflict resolution, which is a very exciting field. However, I believe we are missing something important when discussing how to use AI in this area.
Before we dive into our main topic, let’s briefly discuss what a large language model (LLM) is, such as ChatGPT, and its connection to prompting, as well as why it is important. If you are not deeply familiar with technology, you may not know that current AI can be likened to something that has read every book, research paper, and web page on the Internet. It has an extremely large repository of human knowledge and can generate human-like text. This sounds like a good start for creating a virtual psychologist, as it knows everything about human behavior and psychology.
However, this is where prompting comes into play. If an LLM knows everything, it still cannot function effectively without context. A prompt serves as a lens that focuses this knowledge for a specific purpose. For instance, you can instruct an LLM to behave like a therapist. If you have tried this, you might find that, in the moment, it can appear to act like a therapist. However, if you analyze the entire conversation, you will notice a significant problem: while it may show empathy or ask the right questions at times, it does not produce the real therapeutic results that a human therapist would.
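To make the "prompt as a lens" idea concrete, here is a minimal Python sketch of a single-prompt setup of the kind described above. The `call_llm` helper, its placeholder reply, and the system prompt wording are illustrative assumptions rather than any particular product; the point is that all of the "therapeutic" behavior lives in one static instruction.

```python
# Minimal sketch of a single-prompt "therapist" setup.
# call_llm() is a hypothetical stand-in for any chat-completion API
# (a hosted model, a local model, etc.); here it just returns a placeholder.

SYSTEM_PROMPT = (
    "You are a supportive counsellor. Listen actively, reflect the "
    "client's feelings back to them, and ask one open question at a time."
)

def call_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion call."""
    return f"(model reply to: {messages[-1]['content']})"

def single_prompt_session(user_turns: list[str]) -> list[str]:
    """Run a conversation where the only 'therapy logic' is the fixed system prompt."""
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    replies = []
    for turn in user_turns:
        history.append({"role": "user", "content": turn})
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies
```

Every reply is shaped by that one fixed instruction, which is exactly the limitation discussed next: nothing in the loop steps back, forms a hypothesis about the client, or changes course.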
Herein lies a major issue. I believe that if we attempt to create an AI psychologist, we must address a critical part that is missing in all current products on the market. LLMs cannot develop strategies or change their behavior, which I refer to as dynamic strategic adaptation. This is how real therapists operate. They not only converse with you but also make educated guesses to better understand the issues and potential solutions, all while testing and adjusting these guesses. They then choose a strategy based on what these tests reveal.
Currently, if you think about a single-prompt product, it cannot perform this kind of strategic adaptation because LLMs lack this strategy component. I propose that it is possible to create a multi-level AI system. For example, we could have one level focused on interaction, which directly engages with the user, similar to current chatbots. More importantly, we could have a second level dedicated to strategy—this would be the "mind behind the mind," constantly analyzing the conversation, making educated guesses, and adjusting the interaction accordingly.
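A rough way to picture this two-level idea in code is as two separate model calls wrapped in a loop: the interaction level answers the user, while the strategic level reads the full transcript after each turn and rewrites the guidance the interaction level works from. This is only a sketch under those assumptions; `call_llm`, the prompts, and the class name are hypothetical, and a real division of labour would be considerably more involved.

```python
# Sketch of a two-level ("mind behind the mind") loop: an interaction agent
# that talks to the user, and a strategist that keeps revising its guidance.

def call_llm(system: str, messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion call."""
    return "(placeholder reply)"

class TwoLevelCounsellor:
    def __init__(self) -> None:
        self.history: list[dict] = []
        # The strategist rewrites this guidance after every turn.
        self.guidance = "Build rapport. Ask open questions. Do not give advice yet."

    def respond(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        # Interaction level: answers the user under the current guidance.
        reply = call_llm(
            system=f"You are a supportive counsellor. Current strategy: {self.guidance}",
            messages=self.history,
        )
        self.history.append({"role": "assistant", "content": reply})
        # Strategic level: reviews the whole transcript and updates the strategy.
        self.guidance = call_llm(
            system=(
                "You are a supervising clinician. Read the transcript, state your "
                "working hypothesis about the client's real issue, and give the "
                "counsellor one concrete instruction for the next turn."
            ),
            messages=self.history,
        )
        return reply
```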
Creating a multi-level AI system can revolutionize therapy by combining human-like interaction with strategic analysis, making mental health support more accessible and effective for everyone.
To illustrate how this could work, let’s consider the concept of rapport. People who are well-versed in psychology understand how critical it is to establish rapport during any consultation. We need to create a safe space for clients, adapt our communication style, and recognize cultural differences and emotional cues. Currently, LLMs can partially achieve this, but unfortunately, they do not do so very effectively.
When we think about how to improve this, we should aim for consistent flexibility. We need to understand what people truly need and want, tailoring our communication accordingly. Returning to the multi-level system, the interaction level functions similarly to how GPT or other LLMs operate today. However, this system includes an upper level that constantly listens to and analyzes the conversation, providing advice or instructions for the interaction level. This can aid in forming a better connection with clients.
Understanding this dynamic allows us to choose different strategies for different situations, as every individual is unique. It is impossible to convey a single prompt that encapsulates the best behavior for every scenario. Additionally, this system can evaluate how rapport is being established in real-time and suggest improvements. For instance, when a client approaches you with a perceived issue, the conversation may reveal that the actual problem is entirely different, prompting a change in strategy.
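One way this real-time evaluation could be made actionable is for the strategic level to emit a small structured note, rather than free-form commentary, each time it reviews the conversation. The fields below (a rapport estimate, the presented problem, a working hypothesis, a suggested adjustment) are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class StrategyNote:
    """Structured output the strategic level could hand to the interaction level.
    Field names are illustrative, not an established schema."""
    rapport_estimate: float      # 0.0 (no rapport) to 1.0 (strong rapport)
    presented_problem: str       # what the client says the problem is
    working_hypothesis: str      # what the strategist suspects is really going on
    suggested_adjustment: str    # e.g. "slow down", "mirror the client's wording"

def to_guidance(note: StrategyNote) -> str:
    """Turn the note into a short instruction for the interaction level's prompt."""
    return (
        f"Rapport is estimated at {note.rapport_estimate:.1f}. "
        f"The client presents '{note.presented_problem}', but consider that the "
        f"underlying issue may be '{note.working_hypothesis}'. "
        f"Adjustment for the next turn: {note.suggested_adjustment}."
    )

# Example: the presented problem and the working hypothesis diverge,
# so the guidance steers the next turn accordingly.
note = StrategyNote(0.4, "my partner never listens", "fear of not being valued",
                    "reflect feelings before asking another question")
print(to_guidance(note))
```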
All of this analysis feeds back into the interaction level, creating a communication style that feels more authentic and human-like, akin to that of a real therapist. This is crucial because many companies are currently attempting to develop similar systems, often motivated by profit rather than genuine concern for mental health. However, I believe that if we harness this technology to promote mental well-being, it will be more challenging to manipulate individuals who are healthier and happier.
I hope that one day we can create a product that genuinely enhances people's happiness and mental health. Thank you so much for your attention.
Now, I would like to invite our guests to share their feedback. Do you agree with these points? What is your vision, especially from your perspectives in business, medicine, and wellness?
Who would like to start?
"I can start," one participant interjects. "For me, this is a very interesting point, and I thank Igor for bringing up such compelling material. Personally, I feel that the conversation should not center around whether we should replace human therapists with AI therapists, but rather how we can make therapy and psychology more accessible to those who currently lack access.
The future of therapy isn't about choosing between human and AI; it's about making mental health support accessible to everyone, everywhere.
As you mentioned, one thing does not substitute the other. There are different ways in which AI can help create tailored solutions. However, it is crucial that technology does not remain separate from humans. We need psychologists, therapists, neuroscientists, and all these professionals who understand how the brain works to train AI effectively. They can also help label the responses that AI generates, ensuring that it learns what is right and what is wrong.
I would like to pose a question for discussion. One challenge I still cannot see a way around is how to handle body language, tone of voice, and overall cultural perception. For instance, as a Brazilian, I can relate to another Brazilian and understand some of the struggles and pain they experience. How do you see the possibility of AI someday being able to do this? What can we do to equip AI with this kind of knowledge in its knowledge base?
That’s a very important question. How can we read all this non-verbal communication? I believe that in the very near future, it will be possible. Right now, some large language models (LLMs) like ChatGPT and others can understand emotions from sound, but they cannot see you. Some systems can see you, but in my opinion, they do not work very well at the moment. However, I am confident that this will change rapidly.
More importantly, if we can develop such systems, it would make psychological help accessible to more people. In many countries, it is not feasible to visit a psychologist’s office and talk about personal issues. I am sure that for some individuals who cannot go to a therapist, these systems can provide support. Additionally, they could assist individuals between therapy sessions.
Thank you, Rafael. Do you have anything more to add, perhaps from a business perspective?
Sure! Nowadays, we are seeing trends in how conflict resolution and psychological assistance are evolving. In my view, there are at least four major trends and clusters of companies in this space. The first one is personalized therapy tools. In this sector, we have companies like BetterHelp and Talkspace, which are already multi-billion dollar companies and are driving significant change.
I also see some very interesting conflict resolution tools emerging. For instance, I know of a startup that has raised over 100 million Euros to develop an app focused on mental health. There are numerous possibilities here. I have discussed with a friend of mine, who is an entrepreneur and a lawyer, the idea of creating solutions for conflict mediation in the workplace. In large multinational companies, it can be challenging to understand all the conflicts that arise. However, with the right tools, we can better address these issues.
Mental health innovation is crucial, but the human touch in therapy remains irreplaceable. Technology can enhance support, but it’s the connection that truly heals.
I know of two companies specifically: one is called Bravery; I don't know their exact value proposition, but I know that in their last funding round they raised 26 million in investment. The other is an American HR platform that has built tools for conflict mediation and has raised over 330 million in funding.
This is a very hot topic right now because we all think that the innovation this can bring is huge. However, I feel like it’s still not clear what we should innovate, what should be kept traditional, what should be handled by machines, and what should be handled by humans. These are my considerations, and I think they are points that I am looking into, which really brings me a lot of curiosity.
Thank you so much, Rafael. Now we will go to our next speaker, Dr. Mauna.
Thank you so much; that was a very beautiful presentation, and your point of view resonates well with me. I am from a country where we currently have one psychiatrist for one million Nigerians and one psychologist for every 100,000 Nigerians. There is obviously a huge deficit of human resources, and technology is one thing we are definitely looking to leverage in order to build mental health equity that is available, accessible, and affordable. So, technology is a big focus for us.
However, I want to share a case in point from when I started my practice. Before the COVID-19 pandemic, at Pac Medical Services, we put out surveys to our clients and asked them to go online for virtual sessions, whether it was for marital conflict or workplace issues. Initially, they did not want virtual sessions; they wanted to see their therapists in person. They wanted to feel, interact, and communicate. They were not having it. But COVID-19 changed the narrative, as people could not go to see their therapists and had to accept virtual sessions, which became our new normal.
During the first year of the COVID-19 pandemic, the World Health Organization reported a 25% increase in the rate of mental illnesses, with women and children being the most affected. There were also many challenges with gender-based violence. We had to find ways to keep interacting with people and helping them resolve these issues despite the pandemic, and technology was the means we had to leverage to do it.
While I appreciate the potential of technology, I believe that multi-level AI is a significant aspect to consider. When therapists talk to someone, they want to create a rapport and build a safe space where individuals can open up. People come to therapists to tell them what they think they need to hear, rather than necessarily what is truly going on with them. As therapists, we check both verbal and nonverbal communication. Can AI do that?
This brings me back to the early conspiracy theories about AI taking over jobs. It is becoming increasingly clear that AI is not going to take over our jobs; rather, we need humans to provide information for AI to be more effective and productive. Therefore, multi-level AI for counseling and relationship issues is essential. It is something that can help people feel that there’s support available.
AI won't replace therapists; it needs human insight to truly connect and understand.
Many clients have expressed that their therapists did not understand them due to cultural differences in expression. This highlights the need for culturally appropriate therapy. I completely accept that building multi-level artificial intelligence for conflict resolution and counseling is highly needed. This development could help people feel more comfortable, knowing that the AI is listening and assisting in breaking down information to resolve therapy issues effectively.
However, as Rafael pointed out, there are limitations in how AI interacts with users. The way AI communicates can often feel mechanical, lacking the human touch that is vital in therapy. Human connectedness is essential, especially in the aftermath of the COVID-19 pandemic, which has left many feeling isolated. AI could play a role in bridging this gap.
Our telemedicine platform, Hard Body, aims to create mental equity by delivering culturally appropriate therapy and developing mobile counseling booths to reach communities. However, we face a significant challenge: the deficit of mental health professionals, with one psychiatrist available for every one million Nigerians and one psychologist for every 100,000. This underscores the urgent need for technology in mental health care.
While I acknowledge that this transformation will take time and is not a simple task, we must strive to ensure that AI feels more humane. It should create a safe space, foster rapport, and demonstrate empathy. We need AI to help us understand the complexities of mental health issues, as depression is a spectrum that varies from mild to severe, and some cases may involve psychotic issues.
The goal is to develop multi-level AI for counseling and therapy that feels more human, allowing people to relate to it better and embrace it. The future is undoubtedly technological, and for Nigerians and Africans, technology is crucial to bridging the gap in mental health resources.
Embrace technology, but never forget the power of human connection; it's the bridge we need to truly thrive in a digital future.
Thank you so much, Dr. Mauna. Can I add a comment before we move to the next topic? Yes, please. Just one thing about the last comment you made: I think it's super interesting, and there is data supporting the idea that when a user knows it's an AI, engagement really drops. In my experience building chatbots in both corporate and startup environments, I've noticed that the chatbot often performs well and acts as a human would; however, the moment it states, "I am going to help you solve this issue," users tend to disengage.
This point is very important because, despite our embrace of the virtual space and AI, people still feel that missing link: the human connection. They want to feel that connection with a person who can provide a safe space. We must embrace technology. Even as we encourage kids to pursue STEM courses, not everyone will be able to do so, which means we will still face a significant human-resource deficit, and technology is what we need to embrace. Instead of debating technology, we should focus on how to make it work for us.
Thank you so much, Rafael, for your comment. Now, I would like to ask Olga how she sees it from the wellness side.
Building on what Yor said, I agree that prompts are not enough, no matter how complex they are; we need more complex, strategic thinking inside the AI. I have encountered simple chatbots that can answer questions and even show empathy, but they do not create a relationship, so I would not return to them. My own work is built around the human aspect, particularly therapy, counseling, and skill-building, such as nonviolent communication skills. These skills help resolve conflicts, and even though technology can facilitate learning, children still need teachers.
Technology and AI can support the processes of learning, therapy, or conflict resolution by providing a bigger picture. When you mentioned that AI has vast knowledge of psychology, including all the books and research, I see it as an advantage. However, the disadvantage lies in how to choose from this vast knowledge. People face dilemmas every day about how to make choices, and each therapist or trainer must make informed decisions. Therefore, it is necessary to have a foundational base for AI to make these choices.
What you mentioned, Yor, about seeing the person communicating with AI is crucial. AI should not make choices randomly; it should consider the individual’s context. AI could support professionals, such as trainers or therapists, by providing clues, which I think is also possible.
I am quite confused now after hearing what Rafael shared about people dropping off when they know it is AI. I believe that, while I might be wrong, AI can provide a sense of safety for users in certain circumstances. Users may feel assured that they will not be judged, as AI would not judge like other people do.
AI can be a safe space for self-expression, offering support when humans can't be there, but will people embrace it if they know it's not human?
Beyond the absence of judgment, AI does not have the human tendency to reject or shame, and it can offer a space where users can express themselves without fear of their information being used for someone else's benefit. I see AI as a tool that can fill gaps in human interaction, especially in situations where humans are limited.
Nonetheless, I remain uncertain about how users can truly engage with AI when they know it is not a human being. We need something that is often referred to as presence or non-verbal empathy, which is typically communicated through body language and the subtle dynamics of in-person interactions. Despite this, AI can still serve as a constant space for users to express themselves. For instance, a user can take their phone, type, or record themselves, ensuring that they are listened to and seen at the exact moment they need it, especially when other people are unavailable.
Moreover, I am contemplating how skill-building can be significantly enhanced with AI. Nonviolent communication is a structured tool that requires learning how to use it effectively. In conflict resolution, it can provide immediate assistance. It functions like a gym, helping users understand themselves and learn how to communicate without hurting others. Many people are unaware that it is possible to handle conflicts differently than through shouting or shutting down.
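Because nonviolent communication follows a well-known four-part structure (observation, feeling, need, request), a skill-building "gym" of the kind Olga describes can be sketched very simply: prompt the user through each step, then assemble the classic NVC sentence. The prompts and helper names below are hypothetical.

```python
# Hypothetical sketch of an NVC practice exercise built on the four classic
# components: observation, feeling, need, request.

NVC_STEPS = [
    ("observation", "Describe what happened as a camera would record it, without judgement."),
    ("feeling", "Name the feeling this brought up in you (e.g. hurt, worried, frustrated)."),
    ("need", "Which need of yours was not met (e.g. respect, rest, being heard)?"),
    ("request", "What concrete, doable request could you make of the other person?"),
]

def run_nvc_exercise(answers: dict[str, str]) -> str:
    """Assemble the user's answers into a single NVC-style statement."""
    missing = [step for step, _prompt in NVC_STEPS if step not in answers]
    if missing:
        raise ValueError(f"Missing answers for: {missing}")
    return (
        f"When {answers['observation']}, I feel {answers['feeling']} "
        f"because I need {answers['need']}. Would you be willing to {answers['request']}?"
    )

# Example usage:
example = {
    "observation": "you arrived an hour after the time we agreed on",
    "feeling": "frustrated",
    "need": "reliability",
    "request": "text me if you are going to be late",
}
print(run_nvc_exercise(example))
```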
I had a wise thought I wanted to share, but I will save that for later. Thank you so much, Olga, for this first part of the presentation, which provided a general overview of our topic. Now, we will transition into the second part of our discussion, which will focus on questions. Each of our speakers today has the right to ask one question to any of the other speakers. If time permits, we can ask more questions, but let's start with one from each of us.
You can ask your question to any of the other speakers. Let's proceed in the same order. I have some hypotheses regarding the current reluctance of people to engage with AI chatbots. This reluctance is particularly evident when comparing the experience of speaking with a real psychologist. What do you think would happen if an AI bot exhibited behavior more akin to a real therapist? Would that change people's willingness to engage, or would they still prefer not to speak with an AI?
Who would you like to ask this question to? I'm not sure who would be the best choice to answer, so let's see who wants to start. Rafael, you were first; please go ahead. My primary concern is who will be responsible for developing this AI. While it has the potential to help people, it also carries risks that need to be addressed.
Trust in AI for therapy hinges on data privacy and the human touch; without it, the conversation may never truly begin.
This technology can help people, but it can also be very manipulative. If we create an AI that can act like a therapist and get you to open up to it, what happens with the answers? Who keeps this data? This is a very big point that we must consider any time we design AI systems. Unfortunately, the idea that we can simply trust an AI may not hold up after all. For instance, ChatGPT uses parts of our conversations to retrain its models. We know that the company behind ChatGPT is large and has its own data security and privacy protocols, but what happens when a company we don't know, or one without that kind of data privacy, suddenly has access to our whole life story?
So, this is my answer: yes, we can prompt AI to act like a human, talk like a human, and even look similar to a human. However, our responsibility for how this AI is used must be taken seriously, and we really must have strong regulations on how it is developed and on what happens to users' data.
Great point there. For me, people come to therapists because of confidentiality, knowing fully that their information is safe. If you have a therapist or a physician, they are bound by ethical codes stating that even after the demise of a client or patient, they cannot disclose anything, except perhaps in certain legal situations. Confidentiality is key. When using AI like ChatGPT, you find that it can sometimes hallucinate or bring in things that do not really fit what you want, which is not how a real, living human therapist sitting in sessions with you behaves. There is also the need for therapists to refer patients for psychiatric evaluation when necessary. In Africa, psychologists cannot prescribe medication; they are still at that stage. We know that in some countries psychologists can prescribe, but otherwise they send patients to psychiatrists for evaluations and prescriptions.
Just thinking about it, if someone is chatting with an AI and actually needs medication, how do we get to the stage where the AI can say, “Look, you need this type of medication”?
These are some of my reservations: how do we minimize hallucinations so that the person doesn't take every word literally, and how do we ensure that people are comfortable? For me, I don't think being comfortable with AI would be an issue. Let's leave Gen X and the Baby Boomers aside; Millennials have changed the narrative when it comes to therapy. They are the therapy generation, and Gen Z is following suit. They would rather stay wherever they are, click on a portal, and receive their therapy, whether it is AI-driven or not. The older generation, however, may not really embrace that, because they might wonder, "What is this all about?"
The therapy landscape is evolving, with Millennials and Gen Z embracing AI-driven solutions, but concerns about data privacy and confidentiality remain paramount.
Interestingly, 33% of Generation Z get their health information from TikTok. This platform, often viewed as a space for entertainment and challenges, has become a source of information for many. However, it is crucial to recognize that what works for one person may not work for another; this is not a one-size-fits-all approach.
While I am not worried about Millennials and Generation Z embracing chatbots and AI-driven therapy, my significant concern lies with data confidentiality. As highlighted during the COVID-19 pandemic, when telehealth became prevalent, there were widespread concerns about data encryption and privacy. Instances arose where patients reported breaches of confidentiality regarding their information, which raises valid concerns about data protection and confidentiality codes.
Moving on, Olga was invited to share her vision but chose to skip that opportunity. The next question was posed by Rafael, who expressed appreciation for the format of open questions. He inquired about the potential for using AI to assist psychologists rather than patients, suggesting that this could provide valuable data to enhance therapy.
I can definitely see how AI can work alongside therapists or skill trainers in the field of psychology. The subjective nature of psychology could benefit from data-driven insights that are not immediately apparent. AI could serve as a tool that builds trust between professionals and clients, ensuring that the information remains confidential and ethically managed.
In this context, a professional could leverage AI to access knowledge that is otherwise inaccessible, allowing for more informed therapy sessions. This could involve analyzing information that AI gathers and presenting it to the professional to enhance their understanding and approach to therapy.
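As a toy example of AI serving the professional rather than the client, the sketch below turns a raw session transcript into a short structured brief for the therapist. `call_llm` is a hypothetical stand-in for any text-generation API, and the brief's sections are illustrative.

```python
# Hypothetical sketch: summarize a session for the professional, not the client.

def call_llm(prompt: str) -> str:
    """Stand-in for any text-generation API."""
    return "(placeholder brief)"

BRIEF_TEMPLATE = """You are assisting a licensed therapist. From the transcript below,
produce a short brief with exactly these sections:
- Themes raised by the client
- Emotional shifts during the session
- Open questions the therapist may want to explore next time
Do not diagnose, and do not address the client directly.

Transcript:
{transcript}
"""

def session_brief(transcript: str) -> str:
    """Return a structured brief the therapist can review before the next session."""
    return call_llm(BRIEF_TEMPLATE.format(transcript=transcript))
```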
In response to Rafael's question, I emphasized that we can't even do without technology right now. AI simplifies many processes, acting as a repository of knowledge akin to having read all the textbooks and conducted extensive research. As a professional, I can access this information without needing to visit libraries or bookstores worldwide. The challenge then becomes how to tailor this vast knowledge to the individual I am conversing with, ensuring that AI is utilized effectively in the therapeutic context.
AI is revolutionizing access to knowledge, but we must ensure it's personalized and context-aware to truly benefit individuals.
However, the challenge lies in how to narrow this vast information down to the individual I am conversing with. We need AI to deliver personalized knowledge rather than a one-size-fits-all approach; this is where some people might disengage, because they feel overwhelmed by AI. While technologists can build the AI, it is also crucial to involve professionals in the process. The reality is that if we are going to use AI to build a platform, professionals are needed to input the correct information. For instance, the context of what is happening in Chicago is different from that in England or in Lagos, Nigeria. We must find a way to gather all of this information so that AI can provide therapy tailored to individuals' specific geographical and cultural contexts.
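One lightweight way to keep delivery context-aware is to store clinician-authored, region-specific guidance and prepend it to the prompt, as in the sketch below. The region keys and guidance snippets are placeholders standing in for material that local professionals would actually write and maintain.

```python
# Sketch: injecting clinician-curated, region-specific context into the prompt.
# The guidance strings are placeholders; in practice they would be written
# and kept up to date by local mental health professionals.

REGIONAL_GUIDANCE = {
    "lagos_ng": "Family and community expectations often shape how distress is expressed.",
    "london_uk": "Clients may frame difficulties around work pressure and service access.",
}

def build_system_prompt(region_key: str, base_prompt: str) -> str:
    """Prepend locally curated guidance to the generic counselling prompt."""
    guidance = REGIONAL_GUIDANCE.get(region_key, "")
    if not guidance:
        return base_prompt  # fall back to the generic prompt if no local guidance exists
    return f"{base_prompt}\n\nLocal context to keep in mind: {guidance}"
```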
In my view, the context of what you deliver is critical. It is not just about knowledge; it is about how that knowledge is applied in therapy or counseling. With AI, having the right context in delivery can resolve many issues.
Additionally, I would like to address another point. We have literally billions of people who do not have access to psychological support. While creating products that assist psychologists is valuable, it does not resolve the main problem of accessibility to mental health services.
Thank you, Rafael, for your question. Dr. Mauna, which speaker would you like to ask a question, and what will it be?
I believe I have been asking questions along these lines. As an AI expert, I think you are in the best position to address concerns regarding data protection and confidentiality. When will we reach a level where we can ensure that data is secure? I know colleagues who have faced legal issues due to the use of platforms like Zoom during the pandemic. We have also observed that people feel more comfortable receiving therapy via WhatsApp when assured that their data is protected through end-to-end encryption. However, there are still concerns about data security.
I would like us to discuss data protection in the context of AI, especially when individuals share sensitive information. If anything goes wrong, it raises significant concerns. You are absolutely right; this is a very important question. Currently, we are exploring technical solutions to secure data. However, it is essential to address how companies manage and protect this data.
Moreover, we may soon see interesting developments where individuals can access services by providing their data instead of paying with money. For many people, this could be a viable option. While we do not know if this is the definitive direction for the future, it is certainly a possibility worth considering.
In a world where data is currency, safeguarding our privacy is not just a choice—it's a necessity.
Regardless, I agree that we need to be very careful about our data and the data of any company we work with as clients.
Currently, I do not have the best solution; in fact, I think nobody does, as the landscape is changing very rapidly. I hope that those who are working on these issues have enough responsibility to be concerned about privacy. I would like to ask Rafael if he would like to comment on this as a technical AI expert.
Rafael responds: One key aspect when discussing user data is the importance of making it anonymous. We should never save identifiable data about users, meaning that ideally, we would never know who the user is in the end. However, there are issues we face today, such as the prevalence of third-party cookies. I'm not sure if everyone is aware, but every time you enter a website, it saves some data on your computer about who you are and what you did.
Currently, there is a strong trend toward removing these third-party cookies, which serve as identifiers. With these cookies, good algorithms, and some knowledge, it is possible to build a profile of a user, including their age, gender, and behavior. Nevertheless, we are seeing this kind of data collection being deprecated.
When building AI systems, we need to adopt the same mentality of not saving personal data. We should only keep data relevant to the conversation, meaning that the system might know that a user has a specific problem, but it should never have access to the user's identity. However, I am uncertain about how this can be achieved, especially considering that machine learning models might still track users through other social media platforms, even if they do not explicitly reveal their identity.
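A surface-level version of "never save identifiable data" is to pseudonymize the user and redact obvious identifiers before anything is logged, as in the sketch below. The regular expressions only catch emails and simple phone numbers; real de-identification is much harder (names, addresses, rare details that re-identify someone), so this illustrates the principle rather than a sufficient safeguard.

```python
import hashlib
import re

# Sketch: pseudonymize the user and strip obvious identifiers before storage.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def pseudonym(user_id: str, salt: str) -> str:
    """Replace the real user id with a salted hash before anything is stored."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def redact(text: str) -> str:
    """Remove obvious direct identifiers from a message."""
    text = EMAIL_RE.sub("[email]", text)
    text = PHONE_RE.sub("[phone]", text)
    return text

def store_turn(db: dict, user_id: str, message: str, salt: str = "rotate-me") -> None:
    """Store only the redacted message under a pseudonymous key."""
    db.setdefault(pseudonym(user_id, salt), []).append(redact(message))
```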
This discussion should not only involve people in technology or mental health but also governments and major legislators. If we continue with the same mechanisms, like when you think about buying a new pair of Adidas shoes and suddenly see ads on Instagram or Facebook, we are accessing a level of personal data that can predict much more about individuals, which can become dangerous.
There are surface-level measures we can implement, such as making all data anonymous and not saving it, while using usernames as many companies already do. However, this needs to be part of a much larger and more significant discussion—not only about the implementation of AI in psychology or patient treatment but also in the overall development of AI itself. With the current data available, AI can already predict a lot about individuals.
As leaders in the tech and mental health space, we need to be vigilant and proactive in advocating for safety measures. Thank you, Rafael and Dr. Mauna, for your insights.
AI can mimic empathy, but true understanding is a human trait we must protect and enhance.
Next, Olga posed a question regarding the potential of training AI in metaphorical thinking. She noted that in the realm of empathy, it is crucial not only to express words but also to demonstrate understanding. This understanding can be activated through metaphorical thinking, which may involve guesses or imagery. Olga pointed out that currently, there seems to be a separation in the market between chat interfaces and image or video creations. She wonders if existing technologies could be utilized for empathy in therapy and counseling, enhancing emotional understanding rather than just intellectual engagement.
Olga continued by acknowledging the power of metaphors in therapy and expressed concern about the limitations of current large language models (LLMs) in managing metaphorical responses effectively. She questioned whether LLMs could truly understand people, suggesting that while they may not possess genuine understanding, they can simulate it convincingly.
Another participant shared a similar yet slightly different perspective, stating that while AI can generate metaphors, it operates merely as a prediction algorithm. They emphasized that AI can create the illusion of understanding but lacks true consciousness. This raises the question of whether AI can ever be conscious, to which the participant responded that while it may not be conscious, it can convincingly pretend to be.
As the session drew to a close, participants expressed gratitude for the engaging conversation. One participant remarked on the powerful leaders present and appreciated the opportunity to learn from such insightful discussions. They highlighted the ongoing process of developing AI into a more empathetic and conscious tool for both patients and professionals.
Another participant reflected on the insights gained regarding the delivery of therapy through AI, emphasizing the need for improvements to ensure users remain engaged. They acknowledged the human resource challenges in therapy and recognized that this journey is a process rather than a single event.
Olga concluded by expressing her gratitude for the fruitful discussion and her hope that the collective efforts would contribute to building systems that promote peace and unity among humans.
The session wrapped up with a brief overview of the Mental Growth Network, which focuses on the intersection of business, wellness, medical health, and technology. Participants were encouraged to stay connected through their LinkedIn group and to access recordings of the talks on their YouTube channel. Thank you to everyone for participating, and we look forward to seeing you at our next online talk.