Engineering with AI - Always Intoxicated

Published January 22, 2023

AI has been around for a while but has really exploded in the last few months. We are seeing AI that creates art, code, and all forms of writing. In this episode, we discuss how AI has impacted our industry and share our thoughts on where we think it will go.

Picks

Panel

Episode transcript


Ryan Burgess
Welcome to a new episode of the Front End Happy Hour podcast. I'm sure you all have been hearing a lot about AI lately. Likely you're hearing about ChatGPT or seeing AI-generated art. In this episode, we're going to discuss how AI has impacted our industry, or will impact our industry, and share our thoughts on it. Let's go around and give introductions. Stacy, you want to start it off?

Stacy London
Sure. Stacy London, principal front-end engineer on Trello.

Jem Young
Jem Young, engineering manager at Netflix.

Shirley Wu
Shirley Wu, former software engineer and data visualization creator, now human being in grad school for art.

Augustus Yuan
Augustus Yuan, software engineer at Twitch.

Ryan Burgess
And I'm Ryan Burgess, I'm a software engineering manager at Netflix. I really wish I had a more creative title like Shirley, but here we are. In each episode of the Front End Happy Hour podcast, we like to choose a keyword that, if it's mentioned at all in the episode, we will all take a drink. What did we decide today's keyword is? Recommend. I recommend we all take a drink, because that's likely going to be said a few times. That was a really bad joke, but hey, it's all right. Before we really dive into the topic, anyone want to define what AI is? I mean, artificial intelligence, but you know, what does that mean?

Stacy London
What is intelligence? What is artificial? I had to look it up, because you start to go down a philosophical rabbit hole, like what is intelligence and what constitutes it? So anyway, the textbook definition was perceiving, synthesizing, and inferring information, but with machines doing that as opposed to humans.

Ryan Burgess
Okay, that's not bad. I always think, the minute you say AI, I think machine learning. That goes hand in hand with it. I think that's fair. All right, so computers are doing things for us and being smart.

Shirley Wu
Oh, I was gonna say, I think there's also a very interesting discrepancy between how AI is portrayed in mainstream media versus what it actually is, what our understanding of it is. I'm only tangentially in the field. This is my disclaimer for the rest of the episode: I took one AI class in college, and I've never touched it again. I'm just an outside observer, fascinated with what's going on, and also slightly scared.

Ryan Burgess
I watched the Terminator movies. I don't know. I, Robot? That's good, too. There you go.

Augustus Yuan
When I think of AI, I think of SmarterChild. Do you all remember that chatbot on AOL Instant Messenger? No? Well, it's because y'all had friends.

Jem Young
I know. You're software engineers.

Augustus Yuan
Yeah, I really liked talking to this bot on AIM. I actually still don't know how it worked to this day, but you'd just ask it questions and it would have pretty good replies. Like, how was your day? It'd be like, good, how were you?

Ryan Burgess
And that was so long ago, too. Yeah, that's really interesting. I hadn't heard of that one.

Jem Young
I think of AI as giving a machine known inputs, like human-understandable, human-based inputs, and it synthesizes new information from that that we can't predict. That's the difference between me typing something into Google, which does a keyword-based search or some sort of algorithm, but it's all predictable, versus an AI where we ask it the same question and we don't necessarily know the response, but we know the inputs. That's roughly how I think about it. My problem with AI is that, like any new buzzword, it's way overused, kind of like machine learning or self-driving cars. So those are my thoughts around AI so far.
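
A toy sketch of that distinction, with everything below invented for illustration: the keyword lookup is fully deterministic, while the "AI-style" reply has known inputs and a known candidate pool but an output you can't predict run to run.

```typescript
// Deterministic: the same query always returns the same answer.
const faq: Record<string, string> = {
  "what is ai": "Machines perceiving, synthesizing, and inferring information.",
};

function keywordSearch(query: string): string | undefined {
  return faq[query.toLowerCase().trim()];
}

// Non-deterministic: known inputs, known candidates, unpredictable output.
const candidates: string[] = [
  "AI is a branch of computer science concerned with inference.",
  "AI means machines that synthesize new information from known inputs.",
  "Artificial intelligence is notoriously hard to pin down.",
];

function sampledReply(): string {
  return candidates[Math.floor(Math.random() * candidates.length)];
}

console.log(keywordSearch("What is AI ")); // always the same string
console.log(sampledReply());               // varies between runs
```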

Shirley Wu
Speaking of that, I think it's also really interesting the way that academic AI has progressed. I only have a view of it from maybe a decade ago, when I took that class. What I distinctly remember, and I'm sure we learned more things, was training Pac-Man to beat the game. I think that was directed learning: we had to train it and tell it what are good results and what are bad results. Compared to now, that almost seems a little bit primitive. My understanding of where it has gone now, again outside looking in, is all of those things with neural networks, where instead of retroactively telling it what's right or wrong, we're trying to simulate human brain connections more. All of that, to me, is very interesting. I think what Jem said, that it's almost a response that's non-deterministic, is really interesting to me. And I guess I have a curiosity, a question, as someone that hasn't been following it that closely for the last two years. I've seen these things people call ChatGPT, and I don't fully know how it works. When will it progress past the point where we're guiding its learning? Have we already progressed past that, or are we still feeding it datasets and telling it what was right and wrong? Later on, I would love to also talk about all the controversies, when it's been really incredibly badly implemented. But for now, I'm curious: do you think we're already there, where it's already a form of intelligence that does not need the guidance and the direction?

Jem Young
We're not quite there yet, but we're moving there probably faster than we should, which is why I'm thankful the field of AI ethics exists to go after these questions. I'm a big fan of science fiction, and science fiction writers have been delving into this area for a long time: what does it mean to be alive? What is intelligence? What are the implications of that? We'll get to that later; it should be a good topic, because I have a lot of thoughts on it. I think the subset of AI that people are most familiar with is machine learning. You all are pretty familiar with machine learning, I see people nodding. My simple explanation of machine learning is that it's predictive optimization. You say, hey, given every car on the freeway, I want to know how many red cars pass in a certain amount of time, and then compare that to blue cars, something like that. You give it some arbitrary problem and say: here's the dataset, here's the answer I'm trying to get to, and these are all the inputs I give you. And I give those inputs weights. I say the speed of the car is a weight, so that's important, and the color of the car, I say that's most important, because that's what we're trying to answer, versus the number of passengers in the car, which doesn't really matter for the problem I'm trying to solve. So that's machine learning. We've been doing that for many years now, and that's good, because those are known inputs with an output that we know how to solve, and we can roughly trace the steps of the algorithm, the process of how it arrives at certain bits of information. AI is a broader thing, where we have a question we want to answer, usually in human-readable form, or some sort of output. That's ChatGPT, or Stable Diffusion, or something like that. It takes an input that the computer can recognize, runs it through this massive dataset that we don't know, because it's so big that no one can actually say specifically what's in it, and then arrives at an answer. That's essentially the difference between ML and what we're calling AI these days: we know the information we taught it, but we don't know how it arrived at those answers. Which, yeah, there are a lot of ethical issues with that and a lot of problems. But it's also a super powerful tool.
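
A minimal sketch of the weighted-inputs idea in Jem's car example. The weights and threshold below are made up for illustration; in real machine learning they would be learned from labeled data rather than hard-coded.

```typescript
// Each input feature gets a weight reflecting how much it matters to the
// question "is this one of the red cars we're counting?"
interface Car {
  speedMph: number;
  isRed: boolean;
  passengers: number;
}

// Hand-picked weights: color dominates, speed matters a little,
// passenger count is irrelevant to this particular question.
const weights = { speed: 0.1, color: 5.0, passengers: 0.0 };

function score(car: Car): number {
  return (
    weights.speed * car.speedMph +
    weights.color * (car.isRed ? 1 : 0) +
    weights.passengers * car.passengers
  );
}

const car: Car = { speedMph: 65, isRed: true, passengers: 2 };
console.log(score(car) > 10 ? "count it" : "skip it"); // "count it"
```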

Ryan Burgess
Yeah, I think that's the thing, too. Like Jem said, there's the ethical piece to it, but there are also really great use cases for it. And I think that's what we're also seeing pop up around the engineering community: there are a lot of amazing tools available to us now that are actually helping you do your job more efficiently, and that's pretty impressive. I'm curious what you all think of that, or if you've even tried some of these tools and found them useful.

Stacy London
Have you heard of GitHub Copilot? I've not actually tried it yet, but that was the first thing that popped into my head when I think of AI and engineering. I've heard good things and bad things. Good things like, oh, it takes away boilerplate, because that's the most annoying thing when you're coding: having to write stuff that doesn't take a lot of thinking, just stuff you have to do to get the code to run. And you're like, oh, if I could use my brainpower to solve a more difficult problem, that would be a better use of my time. So that seems good. But then, yeah, there's the other side of it. What if I do a technical interview and have GitHub Copilot do the answer for me? Is that me showing my intelligence? What does that mean for knowing how to code anymore?

Augustus Yuan
Yeah, that was a huge problem. I think one of the things that happened with GitHub Copilot was it started recommending certain boilerplate that ended up having vulnerabilities that weren't caught, and everyone that used it got that vulnerability. There was a huge issue around that event. I love that you brought that up; it was one of the first things I thought about when I thought about AI and our field and engineering.
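
The episode doesn't say which snippet was involved, but a classic instance of the vulnerable-boilerplate pattern that studies of Copilot suggestions have flagged is SQL assembled by string concatenation. A hypothetical sketch; the `users` table and the `sqlite3` dependency are assumptions for illustration:

```typescript
import sqlite3 from "sqlite3";

const db = new sqlite3.Database(":memory:");

// Vulnerable: user input concatenated straight into the SQL string,
// so a name like "x' OR '1'='1" changes the query (SQL injection).
function findUserUnsafe(name: string): void {
  db.all(`SELECT * FROM users WHERE name = '${name}'`, (err, rows) => {
    if (err) throw err;
    console.log(rows);
  });
}

// Safer: a parameterized query; the driver handles escaping.
function findUserSafe(name: string): void {
  db.all("SELECT * FROM users WHERE name = ?", [name], (err, rows) => {
    if (err) throw err;
    console.log(rows);
  });
}
```

An assistant trained on lots of existing code will happily reproduce the first form, which is part of why generated boilerplate needs the same review as hand-written code.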

Ryan Burgess
Yeah, I think it's been a powerful tool in the sense that you get that boilerplate, like Stacy said, and you can just spin something up and not have to think about it. Or if you're thinking of a method, it just autocompletes or gives you those answers right in your IDE, versus having to jump to Google. We've also talked about that with interviews: can you use Google in an interview? To me, yes, because just because you find something online, you still have to leverage that piece of code or understand it. So maybe AI is taking it a bit further, and that could come into question. Another thing: I was at GitHub Universe, I don't even remember which month, sometime last year, and I was really impressed with the understanding side. Say you were just jumping into a new framework, like you'd never touched React: you can get a lot of understanding very quickly by leveraging Copilot to help you get started. I thought that was pretty powerful, too. So it's not even just taking boilerplate or getting something started for you that you already understand; it's also helping you understand how to write a new language or leverage a new framework, which I thought was really cool. That's pretty powerful in itself, especially if you were jumping into, say, a new project where you're like, oh, I know JavaScript, but I don't know Ember, or whatever framework that team is using. So I thought that was kind of cool to see.

Stacy London
Is the script kiddie now the AI kiddie? Like, you know, copying code... now we're just having AI do the copy-paste.

Augustus Yuan
Right? Yeah.

Shirley Wu
I was gonna say, I think everybody has been mentioning AI as tools, AI as just another toolkit for us as engineers. And I feel like that's the most important part of it for me: that we're viewing AI not as the solution, but as just another tool in the work we do. And when Stacy asked what it means for someone to use Copilot in their interview, I was like, they're just using the resources they have available. To Ryan's point, we should allow Google in interviews because we Google all the time in our work, and in a similar way, this is just another tool that we need to know. If anything, I think it'd be really interesting to see if it's actually a really good interviewing tool, because maybe there's a difference between someone that uses Copilot and has no idea what they're doing, and someone that uses Copilot to their advantage in an interview. Can you use that as a way to distinguish someone's ability?

Augustus Yuan
I think that's a super good callout. There's a huge difference between someone going to Google and just copy-pasting whatever the first search result is, versus someone who Googles and says, oh, here's finally the article I was looking for; yes, this is the API I wanted to use; I didn't know what the second parameter was, but this is it. Those are totally different things. And that's kind of what makes ChatGPT really scary, because it is so powerful. As a disclaimer, I've never used it, but I've seen what people have done. I've heard you can just say, write me an essay about some topic, and it'll spit out the essay, and people will just copy-paste that essay. That's just bonkers to me, because I read these essays and I'm like, wow, this is pretty good, way better than anything...

Ryan Burgess
It is quite good. I have tried ChatGPT a few times for certain things, more just to play around. I think even in the last episode I said, yeah, I felt like you could start an email or a doc or something you're writing from bare bones, nothing, and it can kind of give you a bit of a template to start with. But when you start to read it, the words are almost not the words I would use. It's trying to be over-intelligent in some places, or when explaining something, especially in the technology space where you're trying to explain a technical detail, it does it, but it just doesn't flow the way I would write. So that feels a little bit off, and it doesn't necessarily go into the depth that I would want. So yes, write an essay; maybe that's a good starting point for someone, and maybe that's not that bad; it's kind of doing some of that research and bringing it into one spot for you. But I don't think you can just hit submit. I don't think you're like, oh, I'm done. I think you have to then really wordsmith it and also add more depth. It just doesn't do that right now, in my opinion.

Shirley Wu
I have a funny story that's only kind of related, which is that at the end of last semester, we had our thesis papers due in grad school. And I was sitting across from a few of my classmates that had not really started their thesis paper the day it was due. They were trying to use, I can't remember if it was ChatGPT or some similar tool, to generate their paper. But the funniest thing was that the service was down because it was so highly trafficked, probably because many other students had a similar idea, because it was finals week. I just thought it was hilarious.

Augustus Yuan
That's so funny, that OpenAI engineers are looking at their seasonal patterns like, oh, looks like it's finals week, traffic's really going to crank up, better scale up.

Stacy London
I think I even saw that there's someone working on something to determine whether ChatGPT was used, to help combat that. So maybe teachers and professors could run that against a paper and be like, oh, that was generated. Just fascinating.

Ryan Burgess
Also, Augustus, I think you shared that as a pick last episode. Yeah, that's really cool, because you almost need some intelligence on top of that to identify when it happens. And granted, there will still be teachers out there that are unaware of it, and I'm sure people will still slide by submitting something that was generated by ChatGPT. But I'm curious to hear from you all: do you think some of these tools will be helpful in your day-to-day engineering work?

Jem Young
Yeah. I think Brian Holt said it many, many episodes ago, maybe even a hundred episodes ago: an intern or a new grad could do 85% of the work that I do. Why I'm a senior engineer, and why I make senior engineer money, is the other 15%: the edge cases, seeing the bigger picture, things like that. So I can see, in the future, I want to create an application and I just use some sort of AI tooling and it spins it up for me. Because why should I do the boilerplate? I don't know if any of you have spun up a Node project or a React project or something like that; it's a lot of boilerplate, the same thing over and over again. It'd be nice if that was just offloaded to something else, right? Now we build tooling to do that, shell scripts and command-line things like that. But what if we took that a step further and said, hey, based on the source code that you've seen thousands of times, what's the most common way of spinning up a new React application? That's nice; it's a tool; it saves us time. The challenge that I see with any AI is that it looks correct. That's what people said about ChatGPT: it states things matter-of-factly, and it sounds correct, but it's incorrect a lot of the time. And humans are lazy; you're like, yeah, that looks right, and we move on with our day. And it could have imported some giant library wrong, something everybody knows how to import but the AI doesn't, and now you have a giant vulnerable npm package because you were just chilling or, you know, trying to be efficient. Okay, lazy. So that's a challenge with any sort of AI-based tooling, without having the expertise to say, yeah, this is correct, this is not correct.

Stacy London
And to troubleshoot it if it starts breaking or something, because you didn't actually build it. If you don't know how it was built, or why it was built, then you get into trouble too.

Ryan Burgess
Yeah, I think that's a valid point. Anytime you work in a project that you're just diving into that someone else has written, it takes a lot of extra work to debug something. Just because you weren't familiar with it doesn't mean it was bad or wrong, but that could be a barrier too: there are going to be issues, or there are going to be changes that need to be made, and that could be the trade-off you're making, that you now have to go and comprehend what happened or what needs to change. So I think that's a valid point, Stacy.

Shirley Wu
I have two follow-up questions. One is, I think it's really interesting to be talking about, like Jem quoting Brian, how a new grad can do 85% of the work. That actually makes me wonder: do you think that perhaps more senior engineers will still have job security, but what does it mean for new grads? What does it mean for much more junior people, if they don't get the opportunity to go through and practice building this kind of boilerplate code, and to get the understanding we've gotten, to Stacy's point, of how to read it and how to discern if it's right or wrong? What does it mean for, I guess, onboarding? And what does it mean for developing from junior to more senior? Does it mean that we're going to hire fewer junior people, because these AI tools are helping us take care of a lot of the more basic things? Or is that not a worry at all? And the second question is a little more philosophical, I don't know. The way that we're talking about it makes me think of when dishwashers and washing machines came in, and it was advertised as helping us save time. And what it did was free up some time, and then it freed it up for us to just toil away at other things. So if this frees up our time from having to do a lot of the basic things in code, what does it lead to instead? That's my two questions.

Augustus Yuan
I have some strong opinions about the whole onboarding thing. I think it definitely will impact that. You know, I personally test junior engineers to understand if they know why they are making certain decisions. And I personally think AI is really good at showing those best practices up front, and it's probably on the onus of the students or the engineers to really understand why whatever is recommended... cheers, cheers, cheers. I just wanted a drink. But I personally don't think that is a new problem. Even before AI, I still remember talking to someone who was still pretty new to programming, and, for what it's worth, I blame their teacher more than them. They were asking for help with JavaScript, and they showed me some jQuery syntax. And I was like, oh, the reason this isn't working is you don't have jQuery, the library, in your page. And he's like, no, no, no, this is JavaScript. I was like, what do you mean? He's like, no, this is all JavaScript; jQuery is something I heard about in class, but this is JavaScript, I just need help understanding this. I was just like, you clearly don't understand the distinction between those. So I think that's a problem that has already been happening for a really long time.

Stacy London
But there's something else that's hard to codify, like user experience, or creating a good flow. Part of that is what we do as engineers too, and that's not something you can automate, at least yet. I guess that doesn't answer anything about junior engineers, but I was just trying to think about this idea of what our value still is within this AI realm.

Augustus Yuan
I'll say, at least at Amazon, and I'm sure a lot of big companies have this... okay, I'm gonna use the word again: recommend. Cheers, cheers, cheers. There are a lot of internal tools at big companies, definitely at Amazon, that they use to suggest things that other engineers have run into. I know they have an internal Stack Overflow for common AWS questions for our own internal things. We also have a lot of proprietary internal build tools to generate an app or something, and very commonly people will run into the same issues, like, why is this broken, why didn't this work? And it's just nice to be able to go search it up. So that's somewhere AI has been super, super helpful for companies at that scale: to have a hub and be able to search and help engineers with that. That's definitely helped me a lot.

Jem Young
I'm not too worried yet. Shirley, I think your point is valid, like, how are our newbies going to make it into the field? But I think of the history of humanity, which is that everything we do is built on the knowledge of the people that came before us. I can actually do very little. I get in my car, I start the car and drive to work; I can't build an engine; I can't make the computer that makes the car go. Everything we do is a product of what other people do, and we don't necessarily know how it works, and that's okay. So however AI factors into modern engineering in the next century, we will adapt. The baseline just becomes, you know what a great application is, or you know how to write the right prompts to get the results you're looking for. Engineering used to look like this; now engineering looks like that. So I think we're okay. The challenge we're going to run into is that humans are lazy, so we will offload as much decision-making as we can, and we shouldn't; there are responsibilities we shouldn't offload to computers beyond a certain point. There are plenty of examples there, and I don't know if we want to touch on the negatives of AI yet. But overall, it's a tool, and I'm not super worried about it. It's more like, you adapt or die. "Computer" used to be the name of an occupation, literally someone that computes numbers; they got replaced by an actual computer, and now that's not a thing anymore. Or a lot of accountants went out of business in the 80s when the spreadsheet was invented; it turns out that's most of what accountants did. And we're still okay; we're still together as a society. So we will adapt to whatever technologies come along. It's more that edge case of where we draw the line. Maybe machines shouldn't be making certain calls: maybe they shouldn't decide who's going to jail and their sentence; maybe they shouldn't decide, oh, there's a limited number of hospital beds, we're going to run it through an algorithm that determines who gets one, because that's "more fair." There are many aspects that we really have to be critical of and slow to adopt. But I don't know; at the rate we're going, we've already exceeded what we can control. So yeah, it'll be interesting. I think it's really up to us as engineers to say, no, this is unethical, no, we're not going to do this. Though Silicon Valley doesn't have a great track record of saying no to unethical things if there's money involved, if we're being honest.

Ryan Burgess
We'll make mistakes; we definitely will. And to your point, Jem, it is a tool, and we will adapt. I think it will be a productivity tool; there are ways in which this will take some heavy lifting off of us, not put us out of a job, because I think that's the beauty of some of these things: you still need to build up some portion of those skill sets. I've seen it in my own career: there are certain things I used to have to do that are now done so much more easily, whether through build scripts or even some libraries or frameworks that do a lot of heavy lifting for you. You just adapt and then take it that much further. So I do believe in that a lot. I do think we should touch on some of the negatives of AI, though. A big one: at the top, in the intro, I mentioned AI-generated artwork. That, to me, I question a lot, and I'm sure others do too. I feel like when artwork is generated like that, it's trying to be so perfect that we're losing something; we're losing the artistic ability of a human being. To me it's not really that cool or appealing to see that: cool, this was just generated by a computer. There's something to be said about artwork: the flaws that come in a piece of art are actually what make it great. It doesn't have to be perfect. So that's something that has bothered me, for sure. I'm sure there are other things that bother people about AI in general, but that's one that really stands out for me.

Shirley Wu
Currently being in an art and technology program, I see a really large spectrum of opinion about AI art and the way that it's being used. Just like what we've been talking about with the engineering tools, I think there's a way to use it that makes sense, and then I think there's a way it's being used right now that is extremely harmful. The way that makes a lot of sense to me about AI-generated art is that I've seen classmates use Midjourney and DALL-E as a way to generate inspiration. They'll have an idea in their mind about what they want the project to look like, and perhaps before, we would have gone to Pinterest or Are.na or Google Images to find an image that's close enough to what we're trying to express, as a kind of idea communication: not the final of what we're trying to create, but to communicate what our intentions are. Or maybe we would have even just tried to sketch it ourselves. But now, and I haven't tried it myself, so I'm going to say they, they're using it in, say, a class presentation to be like, this is ultimately what my prototype will look like or feel like, and this is exactly the mood that I'm trying to generate. I think that's super cool, and I think it makes a lot of sense. Having said that, there is the other side, and I think the most egregious is when people are passing it off as their own work. And there's a lot of conversation to be had, and this definitely falls under the AI ethics conversation, about what it means when people train AI on the thousands or hundreds of thousands of works from masters, most of whom have already passed away, and sell that art. I feel complicated about it, but I think with dead masters it's sort of okay, because they're not reaping the benefits of their art anymore, right? But I feel extremely strongly about all of the models that have been trained on images taken from DeviantArt, or Instagram, or any of the places where current artists, current illustrators, have spent years and decades perfecting their craft and perfecting their style, getting to a place where they have a style of their own that they're proud of. And suddenly someone else is just writing a few sentences, getting that style, and being able to claim it as their own. I think there was some app recently where you could pay $7 to generate a bunch of your own profile photos in all of these different artistic styles, and it went around, and people were really excited about it. And that really sucks, because it was trained on the work of thousands of artists who are never going to see a single dollar of those $7, right? I feel extremely passionate that that's something that needs to be addressed, and there are already people in the industry starting to voice against this. Yes, there is a thing about how technology has always taken away people's jobs; the most prominent example is that photography took away a lot of painters' jobs. But then artists had to adapt to that and create new art based on it.
And I think AI art becoming a thing is another one of those examples where artists will adapt. But I think where the line should be drawn is artists' jobs getting stolen, because literally their style and craftsmanship can be replicated in just a few seconds. Okay, I think I'm done with my soapbox.

Augustus Yuan
I want to add to that, because it is kind of crazy, especially in the art community. You know, as a subreddit moderator, something recent that happened was on the subreddit r/Art, a forum for artists: there's an artist who got banned because his artwork resembled AI art. It sparked huge outrage. This guy had to go to many lengths to prove that he painted the artwork himself. It's almost insulting to hear that your artwork, or your art style, looks like AI art, I guess? And yeah, I'll link the article afterwards. It's crazy what the world has kind of come to.

Stacy London
Yeah, and I saw something about Midjourney, people complaining that the training sets can include copyrighted art, or art from living artists. And that's a huge concern. And they're like, well, you know, terms of service, you can do a DMCA takedown and fill that out. But should that training set have included that stuff at all, unless the artist agreed to it? There are so many questions.

Shirley Wu
Oh, and I follow a lot of artists, because I have art friends. Even just the DMCA thing: they're not, you know, the top 0.1% of artists making millions of dollars with a legal team of their own. They're very average people, probably making a very modest living doing what they love. And they already get taken advantage of so much, because our society is like, well, you're doing what you love, so you don't need to get paid for doing what you love. Anyway, that's a whole other side thing. So on top of spending hours and hours creating their art, they now have to track down people to send DMCA notices to, and they've already had to do that for copyright things even before AI, with people just taking their art, putting it on t-shirts or mugs, and selling them for $20. And now they have to track down people using AI, and that's an even harder thing to prove, right? Because how are they going to prove that their art was part of the training set? Midjourney and DALL-E and all of these things don't even say what their training set was. So, I think that's a stupid thing to do.

Jem Young
The challenge we're running into is that we are slowly ceding our humanity over to machines and algorithms. And you could say there are benefits to that, for sure: it saves us from doing repetitive tasks and having to memorize lots of useless information to get to the core of what we're trying to do. But that's so sad; it's not a great thing for us in general, unless we're very clear: here's what we're going to use AI for, here's what we're not going to use it for, with a very hard line. The problem with all this is no one's responsible. In Shirley's example, and I forget what the largest image training set is, the one where a lot of these, Stable Diffusion and Midjourney and all these, are getting their datasets, but there's one big one, it's from Europe somewhere. Artists are complaining, hey, you stole my art; it was clearly under copyright; it was all over the place; yet somehow you scraped it. And the people that created the dataset are like, oh, we don't know; we got this from somebody else. Okay, who'd you get it from? This other company that does image scraping. And they're like, well, we just scrape the web, and it wasn't us, because we weren't looking for that, and fair use, whatever. It's a lot of finger-pointing, but no one takes responsibility, and it's a problem where it's nobody's fault. But that's where we're going: nobody's in charge anymore. We just care about the output, even though we built these things, and it's kind of taking over what we're doing. I don't know. When I was a boy, I read a sci-fi book where a man jumps to the future, and in the future humanity just lives a life of leisure. Everybody just chills; food is plentiful; there hadn't been a new book or movie in like 200 years, because people just stopped writing them, since computers can write them for you. You want a new movie, you just create one on the fly. And it's like, that's cool, but I don't know, that's just depressing. Is that the future of our species? I think when people talk about the dangers of AI and things like that, it's not that Terminator-style robots are going to come kill us. It's the slow inching away of the things that make us human, our quirks and our flaws, into this smooth perfection based on the optimal ideal the training set tells us. And it is our imperfection that makes us great. That's the real danger: we become like, remember WALL-E, the movie where humans are just sitting around in hover chairs? We laugh, but that's what we're moving towards. And honestly, I hate to say it, tech companies would love to be the people that make that chair, and they'd have no problem with that. That's why we need AI ethicists to say, no, we're drawing a line here, and here's what we're not going to do, collectively, as a society as a whole: we're not going to do this.

Ryan Burgess
But I also wonder, on all of that. I agree, Jem, there need to be lines drawn and regulations. But to the point of generating a script or creating a book: the computer can only be smart from what's historically there, right? It's the datasets we just spoke about. What about the creativity that just pops into someone's head? All the new types of things that we're seeing that someone just thought up, or a new style of painting, or just some unique way of doing something. I don't think the AI can replicate that. I guess it can take some signals and try, but there are definitely things we would lose out on if that's the direction, because there is creativity from humans that you just can't predict. And, you know, even with innovation that is happening, who would have predicted certain things that have been created? Someone had to think that up and run with it. I don't know that the AI can do that. I think it can take pieces and maybe suggest certain things, but I think it's going to lack that creativity, that innovation.

Stacy London
I wonder if we as a society will reject it, too. I've heard some stuff about, like, Gen Y or whatever rejecting smartphones and going back to getting together in person and talking.

Shirley Wu
Gen Y? Okay, there's Gen Z, and millennials... is Gen Y, like, after Gen Z? Is that where we're at?

Stacy London
I think so. I don't know; I think it's the latest one. Yeah. Okay. You know, will we reject it as a society? Especially because there are these studies about people feeling lonely, feeling more isolated than ever, wanting connection and story. Well, who can tell stories? People who have experiences, humans who have experiences. And the story is the thing: half the time with a painting or something, there's a story behind it, and we're really interested in that. With an AI-generated thing, there's no story; it's just cold and generated, and it can't tell a story. So is that something we as a society will just not be interested in, because it's kind of boring?

Shirley Wu
I actually both agree and disagree. I want to go back to the creativity part, because I've been thinking a lot about what creativity means: what makes someone creative? I think there's an interesting consensus that creativity is about taking things from our lived experience, all of these different, disparate things that we've come across, and mushing them together in a new way, like Ryan's unique way, and coming up with something. That's creativity: taking all of these different things, drawing a line through all of them, and being like, aha. And I actually do feel like AI can get to a point where it does that. It already has all the information of weird lived experiences, or weird datasets, and I think it's maybe just a step or two away from drawing those connections. And just like our creativity sometimes has hits and sometimes has misses, I think it will get to a place where it can have, quote-unquote, creativity, in the sense that it will generate some ideas that are interesting and some ideas that are misses. What it's currently missing is being able to judge: is that a hit, or is that a miss? And I feel like even that can be trained, if it has some sort of feedback about what people like and what people don't like, because that's how we react to things. Things that we like are things that we can relate to, and that can be codified too. But, Stacy, what I really, really agree with is that we as humans are going to reject that, because we as humans want that human connection. It might be that we don't want to relate to machines, and we want to relate to other humans. But also, there are a lot of sci-fi books where, you know, people fall in love with robots. So what do I do?

Ryan Burgess
Well, on that note, Shirley, I think that's a great way to jump into picks. We're falling in love with robots, apparently. In each episode of our podcast, we like to share picks of things that we found interesting and want to share with all of you. Stacy, you want to start it off?

Stacy London
Sure. So my first pick is Jeremy Geddes, and I hope I pronounced his last name right. He's a painter that I really like, and I actually purchased a print of one of his works. He started an Instagram account where he's exploring Midjourney, which is fascinating. He's giving it prompts and just playing around with it, so it's interesting to watch someone who is a painter that I really appreciate digging into that, and seeing how he feels about it. So that's my first pick, his Instagram; you can check that out. The second pick is a music pick, non-electronic, it's indie rock this time, but I thought it'd be a fitting topic: "Expert In A Dying Field" by The Beths. Just the idea of, what if our jobs go away? We're experts in a dying field, maybe?

Ryan Burgess
Jem, what do you have for us?

Jem Young
I have two picks today. The first one is relevant: it's an article in my favorite publication, Ars Technica. If you don't read them, they're fantastic; they cover everything, and they're pretty critical of technology. The title of the article is "Controversy erupts over non-consensual AI mental health experiment." Shirley, it's really relevant to what you were just saying about people wanting the authenticity of a real person. What happened was, there's a company, and this is very recent, that says, hey, we provide mental health support through a Discord channel. But without telling people, they were actually just talking to OpenAI or ChatGPT, I forget which one. The ethics of that are really poor, because the whole advertisement for this company was, we connect you to other humans who understand how you're feeling, feel the same way, and can help you out. And then you just offload them to some computer for help. It misses the point. Not all mental health is A plus B equals C, where we just need to solve this equation, so we'll give it to a computer because computers are really good at solving equations. Mental health and humans are super complex. It turns out, when they told people they were actually talking to an AI, the responses were awful, and the reaction was terrible, because people really wanted to talk to another human being, not a chatbot. You can look back three or four years ago: chatbots were the hot new thing, and then it turned out they were terrible, because, one, they were bad at predicting; they just weren't very good. But also, I want to talk to a real human if I need help, because they can parse what I'm saying much better than a computer, and I just want to know I'm not just a cog in the machine. Anyway, it's a good article, but what it really says is, like I said, we're lazy, even as engineers. And if people say, hey, we can skip all this work and still make the same amount of money by offloading this to ChatGPT or AI, they're going to do it. And there are tons of ramifications beyond that. I think we've talked about it before: humans are racist, so AI is inherently racist, because that's just the way humans are. We're not training AI to be better than us; we're just training it to be a super version of us, and we've seen this over and over again. We could have done a much longer hour on the dangers of AI and training sets. Anyway, read the article; it's worth reading. We'll see more of this in the future, and as engineers, we should be calling it out when we see behavior like that. So, on a slightly lighter note, I have a question for you all today for my Silicon Valley pick: how much would you pay to walk faster? We'll say seven miles an hour, according to the company. Seven miles an hour faster, which is pretty fast, actually.

Shirley Wu
Wait, seven miles an hour? Seven hundred dollars.

Ryan Burgess
I'm with Stacy. I feel like it's going to be gimmicky, so probably maybe $100, $200, I'll give it.

Jem Young
All right: Shift Robotics, at shiftrobotics.io. They have a product called the Moonwalkers, and for $1,400 you can wear these shoes, and I say "shoes" in very heavy quotes, that will allow you to walk up to seven miles an hour. I encourage you Front End Happy Hour regulars to go check them out. They're kind of just roller skates, essentially, but they're powered by, quote, AI, which again doesn't really mean anything. Watching the demo, it's like controlled skating, where it slows and speeds you up, so you kind of just glide a little bit, which is a cool idea. However, it's my Silicon Valley pick, and why? For those who haven't listened before, the Silicon Valley pick is all about products that exist because we in Silicon Valley make too much money. Why does this thing exist? It's not really solving a problem. I would love to walk faster; however, I'm an able-bodied human, I can walk at normal speed. What I want to see is a product for people who are elderly or have problems walking, something like that. This does not solve that. And how much do the shoes weigh? 4.2 pounds apiece. I don't know if you've ever had weights on your shoes, but that means pretty much the only people that can use this are young and fit, people that want to spend $1,400 to maybe walk faster. I think it's cool, and hopefully the technology works, but color me skeptical. I'm not sure it needs to exist.

Stacy London
Can you go backwards?

Jem Young
I don't think you can go backwards. Good question.

Stacy London
Come on, they're called Moonwalkers.

Ryan Burgess
onpoint awesome. Oh, surely. What do you have for us?

Shirley Wu
Yeah, so the first person I want to recommend, her name is Yuko Shimizu. She's a very, very accomplished illustrator with a beautiful style of her own, and she has recently been posting a lot about AI-generated art. Part of my opinions are informed by her writing and her experiences; she is one of the people that has been deeply impacted, because her style has been trained into these AIs. I've linked her Instagram, so you can just enjoy her art, which is beautiful, or follow her as she talks about AI. The second person, her name is Ani Liu, and she is an artist I've recently been recommended. It has nothing to do with AI yet, I don't think, but she's an artist that also works at the intersection of art and technology, using technology as the medium. I will say that her projects and works are very interesting social commentary; I think her latest has been about motherhood and surrogacy, because that's what she went through. Very interesting commentary, nothing to do with AI; I've just been really, really loving her work. And are we doing self-plugs? I can't remember. Oh yeah, I have one self-plug. Usually I don't like self-plugging, but I'm extremely, extremely proud of this. Last summer, I gave a talk at Eyeo Festival. It is the most personal and vulnerable talk I've ever given, about my identity, and about burnout, and about how different cultural factors, Silicon Valley included, have made me make myself small. It's also about recognizing that, and about dreams and hopes, and healing from that through my work. Yeah, it's really vulnerable, but I'm really proud of it, and I hope you'll give it a watch.

Ryan Burgess
Awesome. Thank you, Shirley. Augustus, what do you have for us?

Augustus Yuan
Yeah, apart from the pick about the artist being banned from the subreddit, which I'll include, I have two picks. One is a blog article from replay.io, "How we rebuilt React DevTools with Replay routines." Replay, I'll be honest, I don't know the best way to describe it, but it's a debugging platform that you can use for front-end applications. I just thought the article was really well written; it gives a lot of good context on how React DevTools works, what Replay routines are, which is something they have as part of their service, and how they were able to rebuild React DevTools. So I thought that was a really, really good read. And then my second pick is a grinder. It's called the Baratza Encore. I'm really into coffee, and this is a really great starting coffee grinder. Don't go on espresso or coffee subreddits, or else you'll end up buying like a $20,000 setup or whatever; they'll convince you pretty easily. But this grinder is really good for beginners, because you can mod it: you can change the burr grinder inside, with a lot of work, and get a pretty high-end grinder for not too much. So you can start off small, and if you want to upgrade it, you totally can. So check it out.

Ryan Burgess
Awesome. I have no AI picks related to this topic. I have two picks. My first one is the show Kaleidoscope. It's a Netflix original series about heists; you've heard me talk about heists in previous episodes, and I like a good heist story, so I obviously had to watch this one. It's a little bit gimmicky at times, but overall I enjoyed it; it was a good story. I also really liked that you can watch it from any episode; there's no set order you have to follow. They actually randomly give people different episodes, so you might watch the order that you're given, and I'm going to watch the order that I'm given, which I think is kind of cool. It was a different take on things, so maybe it was a gimmick, but I wanted to see it anyway, and it was kind of cool. I highly recommend checking that out. Then, the other day on Twitter, probably a week ago, I posted a product that I bought my wife: this little candle flame thing. I shouldn't even call it a candle, because it is literally like a fire the size of a candle, a miniature little indoor fireplace. It's pretty cool: it burns an alcohol fuel, it puts off quite a bit of heat and flame, and it really sets a nice mood with the level of fire going. It was really funny that I mentioned it on Twitter, because a lot of people were like, oh my god, the house is gonna catch on fire, what if a cat knocks it over? And I'm like, well, one, I don't have a cat. I do have kids, but you know, don't put it near your kids, or light it when they're not around. But yeah, it was really cool. I really liked it, it was a cool gift, and she likes it, so I thought I would add that as a pick. Thank you all for listening to today's episode. You can find Front End Happy Hour at frontendhappyhour.com. You can subscribe to us on whatever you like to listen to podcasts on. Follow us on Twitter at @frontendhh. Any last words? Should we ask ChatGPT? Terminator.