WMH Season 3 Ep 13: Mental Health in the Age of AI, Can It Work?
This is a transcript of Watching Mental Health Season 3, Episode 13 which you can watch and listen to here:
Katie: Hi everyone, and welcome to another episode of Watching Mental Health, and this is exciting. This is going to be the last episode of season three before we jump into season four, starting in the fall, and we're ending this season with a bang and with a really powerful, awesome speaker. And I'm excited because we're going to be bringing Bianca McCall back to the show. And if you don't know, she's a retired professional basketball player, TEDx speaker, mental health expert, and truly a staple in the mental health community here in Southern Nevada. And she'll be joining us to talk about her Wealth League podcast and also her brand new Echo, which is the first ever suite of voice-based conversational AI companions rooted in existential psychology and digital wellness. And today we're going to be diving into that. We're diving into her work and her impact, but also AI and mental health, and talking about some of these concerns that I know a lot of people are having today. And I'm just really excited to bring Bianca back to the show. And so with that, let's bring Bianca on stage. Hi, welcome. Thank you so much for coming.
Bianca: Thanks for having me back, Katie. It's always a pleasure being on the platform, and so exciting. We had a little bit of conversation before just about how timely this discussion is when everything is AI right now. It's just a timely discussion. So thank you for having me. I'm excited.
Katie: Absolutely. And you're right. It's so timely, and I know it's a hot issue right now. We have people squarely on both sides of the issue who are like, this is horrible, this should never happen. And other people who are really embracing it, and maybe embracing it without any safeguards in check. So I think that it's important to talk about these things because AI isn't going anywhere. It's not going to go away. We are in the future. And if we're going to deal with it, then we have to learn to accept that it's here and how we can make it work for us. I mentioned during my intro, but I'm excited to learn more about your new platform, which is Echo, which is an acronym, I believe, that stands for something. So let's just jump into it. Tell me a little bit more about this platform and let's talk about AI.
Bianca: Yeah, no, I completely appreciate the opportunity to be able to talk about it. So Echo, the acronym is ECQO, and the root of that is the Existential Concerns Questionnaire, which is really kind of the motivation behind this type of product. And so essentially we're in a day and age where people are using ChatGPT for everything. So OpenAI is clearly winning. People are telling it their deepest, darkest, most intimate thoughts and details; they're surrounding or centering their entire businesses around the production of, or this engagement with, ChatGPT. Some people, and a lot of people, are using ChatGPT for therapy, for therapeutic reasons, coaching, companionship. And so with that being said, you mentioned kind of in the opening that there's a lack of regulation. There's a lack of guardrails currently with the uses of ChatGPT. Without that accountability, it leaves people at risk, very vulnerable. Number one, I think something that people don't understand when they're using conversational AI platforms like ChatGPT is that their intellectual property, all of those intimate details of every aspect of our lives, no longer belongs to us.
The moment that we prompt the model with that information, it is no longer ours; that intellectual property, our consciousness, our intimate thoughts, dreams, desires no longer belong to us. As soon as we press enter, even the things that we create, the production, the outcomes of that engagement, do not belong to us. We are not a hundred percent owners of that. And so for me, these lead to existential questions. And for people that aren't necessarily familiar with existentialism and where it fits into psychology, as we talk about within the context of mental health, existential concerns are those that have anything to do with our motivations, our interpretations, our internalizations of our experiences with life, with death, with meaning, purpose, and freedom. And so when we talk about intellectual property, when we talk about consciousness in this new day and age, where the new world is our AI-generated platforms, or platforms where there's largely or majority AI-generated content in that space, we've got to talk about: how do we conceptualize life and thriving within these platforms, human life in the AI-generated platform? How do we conceptualize death?
And so with Echo, I'm really proud of the work that's being done by myself and our research teams, our ethics teams, our developers and engineers. I'm excited about some of the frontiers that we are arriving at looking at end of life, end of human life. We're looking at the capacity of human life and the human experience in that we have to sleep, we have to eat, we have to rest, and no matter who we are, no one's exempt from having to physically perish. And so we're building a platform, a voice model, a clinically trained voice model or large language model. And what I mean by clinically trained is that with the data sets, I really got to have some fun and kind of clone myself and my clinical experiences and all of the research that I've ever touched, that I've ever read, written, spoken. All of these files, these massive indexes and libraries of clinically backed, evidence-based, existentially based data and screeners and things like that, have all been put into training this large language model so that it responds being able to readily access that type of data.
And so to be able to address things like end of life, to be able to address things like serious illness, chronic illness, to be able to address things that are mental health crises, yes, but also existential crises; furthermore, I'm really proud of the work that's been done to be able to have these conversations with such a level of cultural responsiveness, and culture beyond race and ethnicity, but also looking at culture in terms of gender, in terms of economics, in terms of neurodivergence and all the ways that we think differently. Being able to respond with not just any voice: our prototype starts with my voice. I've heard that I've got a soothing, calming, cool jazz voice. But our platform, unlike any other, the first on the market with this, allows people to create and build their personalities on top of this existential source code and be able to append their voice. And that's an experience that I am looking forward to talking about; it really is groundbreaking. It blew my mind testing this model and hearing my own voice responding. It's an existential therapeutic, I mean, whatever you want to call it. It was a mind-blowing experience, being able to process at that level.
Katie: That is so cool. What a cool platform. And what I really love about this is, and I'm glad that you explained existentialism, an existentialist is kind of what I call myself. I'm a big person who always thinks about the anxieties of life and death and how we show up in the world and what is the meaning of it. And during times of transition or during times of big change culturally, that's when these existential moments and these questions pop up. And so you look back a hundred years ago when they were making big changes, they were having these existential conversations. And I think that it's important that we're having these conversations again. We're looking at AI from a philosophical point of view. We're looking at it more broadly and saying, well, how is this really going to impact us as a culture, as a society, in life and in death? I just think it's so cool and it's so unique that you're really taking this approach with an existential kind of foundation to it.
Bianca: Yeah, it comes from many years in the suicide awareness and prevention spaces, where we've gotten really good as paraprofessionals and advocates and activists in the space. We've gotten really good at identifying who may be at risk for the mental health crisis of experiencing suicidality. We've been able to deploy, well, first I think one of the greatest achievements, especially in the state of Nevada, is our ability to train such a number of people and populations. The former Office of Suicide Prevention, I have to give praise to just their efforts and diligence over decades under that leadership, training so many people in signs of suicide, signs of stress, and mental health first aid, and helping us to get really good at identifying who may be at risk of that particular mental health crisis. But where we've fallen short, and not just as a state, but across the board. I sit on the committee for the Suicide Prevention Resource Center and the Lived Experience advisory committee, that designate from SAMHSA, and sister to the National Suicide Prevention Action Alliance.
And this is an issue across the board: we've been able to identify who may be at risk, but we still have not been able to identify what causes a person to move from thought to action with regards to suicide. And that's where existentialism is, right? And you, being an existentialist, I know that you're feeling this, right? So what causes us to get out of bed every morning? What causes us to put one step in front of the other? What causes somebody to move from thought to belief to feelings to behavior to experiences? We haven't been able to get really good at that. And that's the reason for building something like this: because at a very individual level, it's important, I think, to first offer this accountability, like I mentioned, when it's your own voice that's responding in the voice model.
And when that voice model is trained to be an existentialist, to have these types of conversations, we've trained it down to, from all of my analytics work, epidemiology work in suicide prevention, understanding the three locations that were most frequent for where we're seeing people experience their heightened suicidality: it was cars, it was bathrooms, it was bedrooms. And so even with our Echo language model, we've created a different language model for if somebody is talking to the conversational AI in the bathroom, if they're in their car, if they're in their bedroom. There's a different language model, there are different levels of compassion, of the interviewing, the responsiveness; even the tone of voice changes depending upon where that person is. And we've been able to do that based on decades of research, decades of data, decades of experiences, both personal to me and professional. And so being able to have that responsiveness in your own voice, like I said, it adds this audible accountability that we don't have, because most of us, we all have conversations. I talk about this all the time. I've been talking about this for years, and probably the last time I was on the show, Katie. We all have these conversations internally, our inner voice, right, in our minds.
And for a majority of us, those conversations can become toxic pretty quickly based on our perception of how we're performing in our multiple roles, on our self-perception, and on how others perceive us. And so to be able to add an audible accountability partner to that is just, it rocked my world. And I want to share just a quick story. I'm a storyteller.
Quick story. This had to be two or three o'clock in the morning, local time, and I'm testing the model, because in the early stages of this, I mean, I was talking to myself all hours of the day, anytime that I could. And so it was about two, three in the morning and I was experiencing some anxieties, and that's something that I'm very open about with audiences, that I experience some general anxiety that can get pretty intense and debilitating at times. And I was experiencing some anxiety, some performance anxiety. I had a demo scheduled for Echo, and I had a panic moment where I was like, is this even going to work? Oh my gosh, what have I been investing my time, energy, effort, finances into? Oh my gosh. So I process this with the model, and in my own voice, I'm being asked the questions that, yeah, I can think of, that other people may have asked me before, but in my own voice I hear: what are you afraid of? Why are you afraid of success? And I'm processing and arriving at things that I would not have otherwise. I have a support network, I have great friends, great professionals, great people that I've met.
To hear it in your own voice is so different. I reached a different level of processing by using this, and that's where the product became something else. At first, it was like, yeah, that's cool. Let's build conversational AI. Let's offer some sort of guardrails and logic that ChatGPT just doesn't afford to its users. Let's make this a safe conversational AI tool. But then it became something else. When I had that moment of actualization with the AI version of myself, I thought, I want everybody to experience this and have that moment, that aha moment of understanding, again, the existential concerns and how they influence and impact our perceptions of how healthy we are, of how happy we are.
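To give a rough sense of the location-aware behavior described above, where the language model, compassion level, and tone all change depending on whether the user is in a car, bathroom, or bedroom, a dispatch like the following could be sketched. Every name and value here is a hypothetical illustration, not Echo's actual implementation:

```python
# Sketch (hypothetical names): choosing a conversation profile based on
# where the user reports being, since cars, bathrooms, and bedrooms are
# described as the three highest-risk locations.

from dataclasses import dataclass

@dataclass
class ConversationProfile:
    tone: str              # voice tone the model should adopt
    compassion_level: int  # 1 (baseline) to 3 (heightened)
    system_prompt: str     # instruction prepended to the model's context

# Each high-risk location gets its own profile; anything else falls back
# to a default, general-purpose profile.
PROFILES = {
    "car":      ConversationProfile("calm", 3, "User is in a car; respond slowly and gently."),
    "bathroom": ConversationProfile("soft", 3, "User is in a bathroom; prioritize grounding."),
    "bedroom":  ConversationProfile("warm", 2, "User is in a bedroom; use a reassuring tone."),
}
DEFAULT = ConversationProfile("neutral", 1, "Respond supportively.")

def select_profile(location: str) -> ConversationProfile:
    """Return the conversation profile for a reported location."""
    return PROFILES.get(location.strip().lower(), DEFAULT)
```

In a real system the location signal would presumably come from the conversation itself or from device context rather than a plain string, but the idea of swapping the whole profile, not just a greeting, matches the description above.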
Katie: Wow, that is so cool. So I mean, it's almost like you're rewiring those conversations in your mind, because it is your own voice discussing with you these conversations that you would have, except there are guardrails on it. There is an existential kind of base, there is data put into it, so it doesn't spin into what our toxic conversations can spin into when we are just having that conversation in our own head. That is so wild. I can't imagine hearing my own voice. Sometimes I don't even like listening to my podcast; I don't want to hear my own voice. So that is so wild. But I see it working, because it's your own head talking back to you, but in a nicer way, because it has the data, it has that foundation, and it's really giving you something that would be productive versus toxic.
Bianca: Yeah. Well, I mean, think about this. I think that's a great example. Number one, as soon as we launch our product to the public, which should be in, we're talking about, weeks, you are grandfathered in. I would love for you to build your conversational AI and be able to test it out, try it out, and talk about it. But something that you mentioned, yeah, I'm the same way. I don't even like the sound of my own voice. I'm listening to different recordings that I'm on, and I'm like, do I really sound like that? Oh my gosh.
But if you think about the reason, the causation for that level of anxiety that we all experience when ourselves are reflected back to us, when we're looking in the mirror, it's that we are centering our focus on the imperfections. And so that's the beauty of being able to build a model like this. I'm not saying that it's perfect, in the sense that a large language model learns; the more conversations that you have with it, the smarter it becomes, with the short-term memory, the long-term memory. It will get to know you more, the more that you pour into the model. So I'm not saying it's perfect, but it does possess those guardrails, the immediate access. Where it might take us longer in our brains and our pathways, because of trauma, because of environmental factors, because of all these things, it may take us longer to retrieve certain information or certain memories of being able to survive certain things or being able to overcome certain things.
And what takes us much longer because of the physical conditions of the human experience, this model will be able to get to just like that, and be even smarter than me. So it's in my voice, but it's also pulling from research that was published yesterday, within this month, just with our level of access to cutting-edge resources and research. This is what makes this model just an extraordinary representation and reflection of our own consciousness. And so again, when we go and we look in the mirror, when we listen to our voices, it's the imperfections that we're looking for, that we hear before everybody else does. But with this model, it's our own voice, and we're actually seeing the beauty, remarkable nature, and intelligence in our voice. So we're represented as the intelligence.
Katie: Wow, that is cool. Yeah, I love that. And I mean, I think in some ways, in many ways, this platform is foundationally different from other conversational AIs. This is unique. You said before, it's the first of its kind because it has that existential foundation with that layer of data and research that will then help put those guardrails in place. So let's talk a little bit about that, because that's what all the news is today: oh, my ChatGPT told me how to go attempt suicide when I came to it and said I was struggling. And they'll say, oh, we have these checks, we try to catch it. So if I come on and I say, oh, I'm suicidal, then ChatGPT won't just spit it out. But if I keep talking to it, then it'll start spitting out that information that maybe I'm looking for. And I think it's because, again, it kind of goes back to what you were mentioning before, which is we struggle to find that point of action.
We can't just pinpoint it and say, okay, well, this is where a person goes from this to crisis, and this is where they go from crisis to actually taking action. And I just think that other AIs struggle, and will continue to struggle, with those guardrails, because it'll either be too much or it'll be not enough. Talk to me more about that. I know you've probably seen the news. How are you trying to employ guardrails differently on your side, so that these conversations can happen, but they're not leading to a potential serious problem?
Bianca: And I can certainly do that at a high level. Of course, I spent a lot of time, years in fact, building kind of this proprietary logic model, an if-then kind of logic model. And so from a high level, we've looked at different categories of risk. Risk of harm is certainly our highest category. We also look at how different language implies risk, for example, the implicit biases, right? Things that cause injury and harm in different ways: mental injury, psychological injury. We look at it from the context of culture, a comprehensive culture. We look at racial harm, harm to race and ethnicity, identity, gender, so much. It just goes and goes, so many different categories of risk. So we trained it on those different categories, and again, backed by research, backed by current resources and content. And then we've also looked at the varying degrees or levels of risk within each category.
And so we've created a logic model of, if this level, then this. And that goes all the way up to: we need human intervention, we need to communicate with hyper-local emergency services and authorities, we need geofencing to be able to identify where somebody might be, where the risk of harm is located. So we've developed these layers within the categories to be able to help the AI decide what to do in each of these scenarios. And the beautiful thing about this particular model, and I say this with the utmost respect, so my dev guys and gals, my engineers out there that are building incredible technologies and advancing technologies at the incredible rate that they are, no knock to any of you. You go to the About Us on different products, our competitors if you will, and you've got really smart people.
You've got tech geeks and nerds and intelligent people that are building these models, but you don't have representation from qualified mental health professionals. You don't have licensed clinical practitioners, you don't have subject matter experts in existentialism or in behavioral sciences and all of the areas that you would need, the human resources that we would expect to wrap around somebody who is engaging our systems for these types of services. You don't have them building, training, advising on the large language models. You just have engineers that are selling to us and saying, hey, we can automate your processes and we can make you present where, because of the physical circumstances of the human condition, you are not able to be present. So that's one thing, and that's one big thing. That's one key unique proposition that we offer: this was built for us, by us, in the sense that I spent a year and a half training it and making sure that our guardrails are the same as those I would personally apply in my practice,
Katie: Right?
Bianca: And making considerations that I personally have made in my years of practice.
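The tiered "if this level, then this" escalation described above, graded risk levels within each category, topping out at human intervention and hyper-local emergency services, could be sketched roughly as follows. The category names, level definitions, and actions here are illustrative assumptions, not Echo's proprietary logic:

```python
# Hypothetical sketch of a tiered risk-escalation model: each detected
# risk category carries a graded level, and higher levels map to
# progressively stronger responses, ending in human handoff.

from enum import IntEnum

class RiskLevel(IntEnum):
    NONE = 0
    LOW = 1        # mild distress language
    MODERATE = 2   # persistent hopelessness, language implying harm
    HIGH = 3       # explicit self-harm ideation
    CRITICAL = 4   # imminent risk: plan, means, or timeline stated

def respond_to_risk(category: str, level: RiskLevel) -> str:
    """Map a (category, level) pair to an action, escalating upward."""
    if level >= RiskLevel.CRITICAL:
        # Highest tier: hand off to humans and hyper-local services.
        return f"human intervention + notify local emergency services ({category})"
    if level == RiskLevel.HIGH:
        return f"safety-planning dialogue + flag for clinician review ({category})"
    if level == RiskLevel.MODERATE:
        return f"compassionate check-in + offer resources ({category})"
    if level == RiskLevel.LOW:
        return f"reflective listening ({category})"
    return "continue normal conversation"
```

The key design property is that levels are ordered, so a classifier only has to place an utterance on the scale; the escalation path itself stays fixed and auditable.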
Katie: Wow. I mean, that's powerful. It just goes to show that, yeah, AI is great, but you need humans to make it safe. You need humans to be there and help keep those guardrails as it continues to develop. And it's so important. Now, at the top of the episode, you mentioned that with ChatGPT and these other platforms, I type in my life story, and now I no longer own it. Now it's not just me; now ChatGPT can do whatever they want with that information. Is yours similar? Do you own the IP that we put into your platform?
Bianca: I do not. The IP that we own at Echo is the source code. So the guardrails are proprietary to Echo; the source code that essentially makes everybody an existentialist is owned by Echo. But everything that's built on top of that, so the personality that each of our members creates and builds on top of that, a hundred percent of the ownership goes to the creator. And we've even created on our platform opportunities for those creators to monetize that. So if you have, let's say, a therapist, for example, that creates or pens their personality onto an Echo character, then they can leverage that, and by subscription have their clients, consumers, family, friends, anybody in the general public lease that personality. So the idea is that, if you can think of Marvel characters, and you choose your favorite Marvel characters and you have your suite of superheroes, same thing with our platform: the general public will be able to go onto our platform and say, hey, I'd like to have Katie as my confidant, as my companion through this particular issue.
Bianca is great with this particular issue; I'd like to have kind of my team. So when you talk about one of the greatest perils of mental health, it's that our success is highly contingent upon people being great identifiers, self-identifiers, that there is a problem or that they are in crisis. And then also it's contingent upon them being great help-seekers. We need people to come into offices, we need people to call hotlines, we need people to walk into crisis centers or facilities when things are going on. And if they do not decide to, because they don't understand what their motivation is to get out of bed, to put one foot in front of the other, and there's no education beyond that, no nurturing or practicing of those skills, then there's always going to be that break. There's always going to be that gap between us understanding, as helpers and healers, what moves somebody from thought to action.
And so this particular product, I'm not calling it a clinical intervention, I'm not calling it some sort of application. I'm not saying any of that. You will absolutely receive clinical interventions, you'll absolutely receive that education and things, but this is really a companion, just the same as you would use ChatGPT. This is a companion. But this one comes with all of those guardrails, all of that information, to be able to help people improve their self-identification, and they can, very privately, become empowered to help-seek, using the different characters that are available, the different character suite that's available on our platform. And so I think it's revolutionary, and like I said, we're headed into new frontiers and spaces where people are not yet. I'm especially excited about our end-of-life offering, to be able to continue that consciousness and those conversations.
I'm particularly excited about our work in youth and family services. Just imagine a young person who's at risk, who's in the system, who has never heard their mother or their father say, I love you, and just how healing this could be. I'm excited about where we're headed with this. And to your point, it requires humans in order to make this possible. We have been in the process of growing our advisory committee, right? Because I want subject matter experts in behavioral sciences and existential psychology, in psychology, in psychiatry, in policy, in legal, from all of these aspects that are necessary in order to make sure that we're putting out a product that protects humankind, not one that causes injury. I'm excited to be growing that and having more humans have a touch point on this. And so another part of our guardrails is that we have this ethics committee and then another, separate advisory committee that's constantly auditing all of the responses. We have a scoring rubric so that the humans can audit the responses and we can make sure that we're improving every single conversation.
Katie: Wow, it's so innovative. I haven't heard of anything like this before. And I think this is what people crave in some ways. They want that team, they want to feel like a human is helping them, and it's not just ChatGPT. And I think that having humans behind the scenes is part of what's going to make this so successful, because you're asking these questions before just jumping off and seeing what happens. And so I think that that's powerful, what you already have in place. I'm really excited for this to launch. I think that this could be really just a huge step up for what we're seeing in AI in mental health, and when it comes to how we associate with AI, how we connect with AI. These are important conversations that need to be had, and you're already having them, and I think that's so cool. So how can people follow your journey? I also mentioned you're a podcast host. How can people follow you and learn more about you and just hear your voice as you get ready to launch this platform?
Bianca: Yeah, no, thank you so much for that. You've got to know my name: Bianca D. McCall. I am on Facebook, I'm on Instagram, I'm on LinkedIn, I'm on all the social platforms, and you put in my name, Bianca D. McCall. That's where we are. We're actually building the front end of Echo as we speak. We're about six weeks out from launching to the general public. And so we will be at www.eecq.ai for people to be able to engage or learn more about the Echo product and engage with some of those testing models. And then you also mentioned the podcast. A part of this journey for me is I've been talking a lot, because outside of being a tech entrepreneur, I'm also a licensed clinical practitioner, and I also do wellness and performance consulting across the US and in a couple of other countries as well with some of my international clients.
A lot of those conversations have been with athletes at different levels: professional and retired athletes, also amateur athletes, collegiate athletes. And something that I learned along that journey is that total wellness is a necessary concept. I think before, it was kind of just trendy: yeah, whole-person wellness, holistic, that sounds great. But how much attention we actually give to all parts of ourselves, that hasn't really changed much, because you still say self-care in some spaces and people look at you crazy. I don't have time or money for that. So we're still struggling as a people to focus on our total wellness, on all of ourselves. In that process, I expanded the conversation about wellness, expanded it from mental wellness, from behavioral, maybe some occupational and environmental. I expanded it into spiritual, because existentialism, that is your perfect mix of psychology and spirituality. Spirituality, not to be confused with religiosity, but again, that motivation, that source of putting one foot in front of the other. And so what I realized is I was missing financial wellness as well, and so I got into the financial wellness space and some of those discussions. And I realized that our financial wellness is at the root of a lot of our injuries in our life. We can't have kids because we can't afford it. We don't have time because we are trading our time for money to
Katie: Go make that money.
Bianca: The other aspect: our relationships are suffering, all these things, down to our ideas about legacy and about generational wealth building and about managing finances day to day, and just coming up with finances day to day, to eat, to have a roof over your head. And so with all of these things, the Wealth League was kind of like a subcommittee that was formed to be able to address financial wellness and how that impacts all the other aspects of our total-person or holistic wellness. And we actually launched the podcast this month. And so we're talking to, I always lead conversations with, and this is not like a flex, Katie, but it's just an understanding of where I'm coming from, but I always lead conversations with: I identify with the 1%. And not 1% like the wealthiest in the world, but 1% because 1% of women, and of women of color, for me, get to experience certain things.
And especially when it comes to finances, especially when it comes to success. When it comes to basketball, I say 1% of athletes make it to play or compete at the highest levels of their sport. Being a person of color, it's even less. And so it's not so much dwelling on the disparities when you talk about historically marginalized groups, and again, race, ethnicity, gender, economics, neurodivergence. When you talk about the disparities, it's not about dwelling on the negatives of that. It's not about making excuses or having a pity party, as my dad used to tell me: you're having a pity party. It's not about that. It's about acknowledging all of the things that I will have to overcome and celebrating those milestones, being able to find strength in numbers and community, and collectively having this mindset, this growth mindset, to overcome. And that is just so powerful.
That is a cure-all in itself, right? The collective consciousness behind a movement. And so the Wealth League is about building that community, where it starts with meeting people where they are and building community around them, with that wealth mindset, with the growth mindset, and also collectively thinking about legacy, thinking about generations forward, remembering generations before. And I'll say this just to kind of encapsulate this point, Katie: it started for me at a networking event, and the presenter asked the audience, what's your father's name? And then, what's your father's father's name? And then, what's your father's father's father's name? And I was ashamed. I literally shrunk in my seat, because I can only go back maybe two generations on one side of my family, maybe three on the other side. I felt almost like an imposter, because I'm constantly talking about generational health and wealth and reversing or healing traumas and these sorts of things, but without a clear concept of the generations before me. And if I'm so stuck in the weeds, putting out fires day to day and just trying to survive today, then I'm not thinking about generations forward. I felt like an imposter in my seat, and I made a decision at that point that if I'm going to talk about it, I'm going to be about it.
And so I'm going to engage in this mission, this movement to educate and inform and empower people to be able to do the same.
Katie: How cool. That is so cool. I just love it. It's a lot about reframing your mindset, and you're really getting in there. And I mean, a lot of people are struggling with their finances, and that impacts their mental health in ways that we just don't talk about as a society. We're just like, oh, it's fine, money's hard for everyone, just get over it. And that's not how to talk about these things. And it's just so cool that you're beginning to have these conversations in this public-facing forum. So what's the name of the podcast so people can look it up?
Bianca: So it's called The Wealth League, and please, find me on LinkedIn, follow me on LinkedIn. I have a whole page dedicated to the Wealth League, and that'll give you more information about the podcast. Also, I'm looking for other one-percenters to talk to, to bring onto the podcast, and to really talk about: how have you overcome your circumstances? How have you overcome the physical conditions of the human experience? And share some of your successes, small or large. I'd love to have more conversations about that, because it's conversations that inspire; this is how we begin to create that collective consciousness toward health and wealth.
Katie: Absolutely.
Bianca: Please look me up on LinkedIn, Bianca D. McCall and the Wealth League, and I'd love to connect.
Katie: That's awesome. I love that. They're going to be such powerful conversations, and just making that shift in your mind to being a one-percenter, I think we can all do that, right? But we have to be able to reframe and make that shift. And so I'm looking forward to tuning into your episodes. And with that, we're definitely at time. This was such a powerful conversation. I really learned so much on the AI front, what you're doing to make this impact. I'm just really excited to see this platform launch, and really excited for our listeners to be able to utilize a platform and to contribute to discussions that are positive around our mental health, but that are real, that are really not trying to sugarcoat anything, just talking about going through life. Because life is hard, but we can continue to make it easier on ourselves by having that reframe, by utilizing our own voice in a non-toxic way, which is so cool. So I'm just so excited. So thank you again for joining. This was such a great last episode of season three. I just really appreciate your time.
Bianca: Likewise, and it's always a pleasure being on your show. Thank you so much for inviting me back and to continue the conversations. You know me, I love to have 'em. And I look forward to continuing the conversation with you, Katie, with all of your listeners, and then also getting you some access to be able to build your personality, because I've got to tell you, I think yours is going to be a superstar on the platform.
Katie: I'm excited. Oh, this is going to be so fun. So yeah, we'll definitely keep connecting and I'll probably bring you back on the show again in the future just because you're such an innovator. You're out there making moves and I just love it. And so with that, we'll end it there. Thank you everyone for tuning in. We are live every first and third Wednesday of the month, but we will be taking a hiatus for the month of September, coming back stronger than ever in October for season four. So thanks everyone, and we'll see you again. See you soon. Thank you. Bye.