Google Bard is out—sort of. Google says you can now join the waitlist to try the company’s generative AI chatbot at the newly launched bard.google.com site. The company is going with “Bard” and not the “Google Assistant” chatbot branding it was previously using. Other than a sign-up link and an FAQ, there isn’t much there right now.
Google’s blog post calls Bard “an early experiment,” and the project is covered in warning labels. The Bard site has a bright blue “Experiment” label right on the logo, and the blog post warns, “Large language models will not always get it right. Feedback from a wide range of experts and users will help Bard improve.” A disclaimer below the demo input box warns, “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.”
Google’s Android keyboard and spell checker still can’t get “it’s” vs. “its” right 80% of the time, so I’m not holding my breath.
It’s all good, Zuckerberg was arrested at the launch!
Appreciate the reference to Gboard. For me, it actually never lets me write “its” without changing it to “it’s”. And with my native language it’s even worse, as our grammar is different: basically every time I use a new inflection of a new word, Gboard will change it to some previously used inflection instead. The strange thing is, Gboard used to be great a few years ago, but now its autocorrect is so bad it’s basically making twice as many mistakes on its own as I do myself.
And this name… Bard? It sure sounds like something that Microsoft would use in desperation when trying to make a mediocre product sound hip.
These are all basically the same. But you know that when Google deviates from just labeling things beta and has to use “Experimental,” it’s really alpha.
So far, Bard has given me bad baking advice and suggested imaginary restaurants. On the plus side, it did some nice filtering of restaurant options. It’s like if you ask more specific questions, you’re less likely to get made-up answers. I’m cross-checking a lot of things with Google products like Search and Maps, and I’m not sure why they don’t do the same. Like: I’m going to suggest these restaurants; do they have entries on Google Maps? If not, don’t display them. So use with caution.
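As a rough sketch of that cross-check (an illustration of the commenter’s idea, not anything Bard actually does), you could filter a chatbot’s restaurant suggestions through the Google Maps Places “Find Place from Text” endpoint and drop any name that doesn’t resolve to a real entry. The restaurant names, city, and API key below are placeholders.

import requests

# Google Places "Find Place from Text" endpoint; the API key is a placeholder.
PLACES_URL = "https://maps.googleapis.com/maps/api/place/findplacefromtext/json"
API_KEY = "YOUR_GOOGLE_MAPS_API_KEY"

def exists_on_maps(name: str, city: str) -> bool:
    """Return True if a text search for the restaurant resolves to at least one place."""
    resp = requests.get(PLACES_URL, params={
        "input": f"{name} {city}",
        "inputtype": "textquery",
        "fields": "place_id,name",
        "key": API_KEY,
    })
    resp.raise_for_status()
    data = resp.json()
    return data.get("status") == "OK" and bool(data.get("candidates"))

# Hypothetical restaurant names a chatbot might suggest.
suggestions = ["Example Bistro", "Imaginary Trattoria"]
verified = [name for name in suggestions if exists_on_maps(name, "Chicago")]
print(verified)  # keep only suggestions that resolve to a real Maps entry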
I think now the race is on to get consumers to figure out how and when to use these AI bots. Like, for restaurants, Yelp or Google Search is better in most cases now, unless you want deep specifics about the restaurants. The semantic web was supposed to let us search for things like menu items in a much easier way, but that ship has sailed, so I guess it’s throwing tons of computing power at the problem to get to some kind of deeper understanding of the web. I kind of think that for general use cases like Bing and Bard, these AI things are likely a fad and need to sit deeper in the stack of search/maps once they’ve matured.
Do you really mean it’s worse than flipping a coin? I’m asking honestly and not sarcastically, as I have no idea how it performs; turning off autocorrect is the first thing I do with a new phone.
Looking back at the discourse so far, I think:
“people are using the AI wrong”
Asking BARD or ChatGPT questions that existing mature systems can already answer is not their best use. Who is the first president? What is Beethoven famous for? These are better answered on Google, Bing and Wikipedia.
On the other end of the spectrum, expecting them to write entire applications from scratch is also folly. At least in these iterations.
What they do best is in the middle. They can act as very capable personal assistants. For example, looking at your code and suggesting fixes, unit tests, or even trans-compilation to another language. Or helping write your blog posts. For example, this one.
Basically, someone called it “everyone having a white-collar helper”: not an experienced one, but a rookie, yet still useful.
Bard’s rewrite:
People are using AI in the wrong way. Asking questions that existing mature systems can respond to, such as “Who is the first president?” or “What is Beethoven famous for?”, is not the best use of AI. These questions can be easily answered on Google, Bing, or Wikipedia.
On the other hand, expecting AI to write entire applications from scratch is also a mistake. At least for now.
AI shines in the middle. It can act as a very capable personal assistant. For example, it can look at your code and suggest fixes, unit tests, or even trans-compilation to another language. Or it can help you write blog posts, like this one.
Someone once called this “everyone having a white collar helper”, not an experienced one, but a rookie, yet still useful.
I agree with this sentiment. AI is a powerful tool that can be used to automate tasks, but it is not yet capable of replacing human intelligence. However, it can be a valuable asset for anyone who is willing to take the time to learn how to use it effectively.
And GPT-4:
Reflecting on the conversation thus far, it appears:
“Individuals are misusing AI”
Utilizing mature systems like BARD or ChatGPT to answer simple questions that can be easily found on Google, Bing, or Wikipedia is not an optimal use of their capabilities. Queries such as “Who was the first president?” or “Why is Beethoven famous?” are better suited for these platforms.
Conversely, expecting these AI systems to create entire applications from scratch is unreasonable, at least in their current versions.
Their most effective role lies in the middle ground, where they can serve as highly proficient personal assistants. For instance, they can review your code, recommend improvements, suggest unit tests, or even facilitate trans-compilation to another language. They can also assist in composing blog posts, like this one.
In essence, it has been described as “providing everyone with a white-collar aide,” albeit a novice rather than a seasoned professional, yet still valuable.
Hit the nail on the head. When do people use it? How do they know when to use it? It’s an educational task. I’m not sure if people are going to figure it out without some guidance from Google. Otherwise this is another Alexa/Echo product, where it’s kinda useful for some small niches, none of which are that profitable for anyone.
Bill Shooter of Bul,
Exactly, if you don’t ask the right questions, it is unlikely these systems will give the right answers.
I would liken this to the current state-of-the-art “self-driving vehicles”. If you expect a competent chauffeur, you’d be disappointed, or even worse. But if you use them as tools to reduce the boring parts of driving, they are immensely useful.
SNL predicted this would happen.
“Robot Presentation”
https://www.youtube.com/watch?v=i0dvv4fTiqA