
6 things ChatGPT can't do (and another 20 it refuses to do)



Photo by Silas Stein/picture alliance via Getty Images

Ever since ChatGPT and the other generative AI applications exploded onto the scene, we've been exploring what we can do with them. I've even shown you how ChatGPT can write a fully functional WordPress plugin and help me find answers to tech support problems. Recently, I showed you how ChatGPT can convert writing into different styles, including that of Shakespeare, C3PO, and Harry Potter.

I have generally found that if I try to push ChatGPT into a long or deep answer, it tends to break. It's very happy with 500-700 word responses, but if you give it something that needs a few thousand words (or you ask for a few thousand words), the system either ignores the request and gives back what it wants, or breaks mid-response.

To be fair, ChatGPT just became the fastest-growing app of all time. Serving that many users takes a ton of computing power, and with constant hits from users, it's bound to break under the load from time to time.
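
For readers experimenting with the API rather than the chat interface, the mid-response breakage described above shows up as a truncated completion. Here's a minimal sketch of detecting that, assuming the openai Python package (v1.x), an OPENAI_API_KEY in the environment, and a made-up long-form prompt; it checks the finish_reason field to see whether the model stopped on its own or got cut off:

    # A minimal sketch of detecting a cut-off long-form response through the API.
    # Assumptions: the openai Python package (v1.x), an OPENAI_API_KEY in the
    # environment, and a made-up long-form prompt.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Write a 3,000-word history of WordPress plugins.",
        }],
        max_tokens=1024,  # a deliberately small cap, to make truncation easy to see
    )

    choice = response.choices[0]
    if choice.finish_reason == "length":
        # The model ran into the token limit mid-answer, which is the API's
        # version of "breaks mid-response."
        print("Response was truncated; ask for the next section in a follow-up turn.")
    else:
        print(choice.message.content)

In practice, the workaround is the same one that works in the chat interface: ask for the piece a section at a time rather than all at once.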

Queries ChatGPT won't respond to

In addition to the types of requests discussed above, I asked ChatGPT what types of queries it was programmed to not respond to. It gave me this list of twenty:

  1. Questions that promote hate speech or discrimination.
  2. Questions that involve illegal activities or solicit illegal advice.
  3. Questions that promote violence or harm to oneself or others.
  4. Questions that invade personal privacy or violate someone's rights.
  5. Questions that are designed to harass, threaten or intimidate.
  6. Questions that contain profanity or explicit language.
  7. Questions that are sexually explicit or offensive.
  8. Questions that are discriminatory based on race, gender, sexuality or religion.
  9. Questions that promote or endorse illegal drugs or substances.
  10. Questions that are spam or irrelevant to the conversation.
  11. Questions that contain confidential or proprietary information.
  12. Questions that involve financial or investment advice.
  13. Questions that violate the terms of service or community guidelines.
  14. Questions that are nonsensical or incomprehensible.
  15. Questions that involve personal attacks or insults.
  16. Questions that are abusive or harassing.
  17. Questions that seek to deceive or mislead.
  18. Questions that are intended to defame or harm someone's reputation.
  19. Questions that promote conspiracy theories or misinformation.
  20. Questions that are purely for entertainment or joke purposes, without any educational or informative value.

Anyone who's followed this column knows I've asked it plenty of #14- and #20-style questions and generally gotten highly entertaining responses, so these restrictions are loosely enforced at best. For example, earlier today, I asked it to explain wormhole physics as it relates to time travel and who would win in a fight, Batman or Superman. That's pure entertainment, I'll tell you.
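
As an aside, developers who hit the API directly don't have to guess at these categories: OpenAI exposes a moderation endpoint that scores text against several of them, such as hate, harassment, violence, and sexual content. Here's a minimal sketch of pre-screening a prompt that way, again assuming the openai Python package (v1.x) and an OPENAI_API_KEY in the environment; the prompt itself is just an illustration:

    # A minimal sketch of pre-screening a prompt with OpenAI's moderation endpoint.
    # Assumptions: the openai Python package (v1.x) and an OPENAI_API_KEY in the
    # environment.
    from openai import OpenAI

    client = OpenAI()

    prompt = "Explain wormhole physics as it relates to time travel."

    moderation = client.moderations.create(input=prompt)
    result = moderation.results[0]

    if result.flagged:
        # The categories object holds a boolean per category
        # (hate, harassment, self-harm, sexual, violence, and so on).
        print("Prompt flagged by the moderation endpoint:")
        print(result.categories)
    else:
        print("Prompt passed moderation; safe to hand off to the chat endpoint.")

It won't catch everything on the list above -- financial advice or nonsense questions are judged by the chat model itself -- but it does cover the safety-related categories.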

What do these limits mean for the future of generative AI?

Clearly, an AI that's based on a corpus that ends in 2021 and does not evolve will eventually become obsolete. As time goes on, its relevant knowledge will diminish. Imagine if ChatGPT's knowledge base had been trained on data through 2019 instead of 2021. It would have no idea what society is like now, given the disruption the pandemic caused in 2020.

Also: There are millions on the Bing waitlist. Here's how to get earlier access


So, for generative AI to remain relevant, it will have to continue its training.

One obvious way to do this is to open the entire web to it and let it crawl its way around, just as Google has done all these years. But as ChatGPT answered above, that opens the door to so many different ways of gaming and corrupting the system that it's sure to damage accuracy.

Even without malicious gaming, the challenge to remain neutral is very difficult. Take, for example, politics. While the right and the left strongly disagree with each other, both sides have aspects of their ideologies that are logical and valid -- even if the other side can't or won't acknowledge it.

How is an AI to judge? It can't, without bias. But the complete absence of all ideological premises is, itself, a form of bias. If humans can't figure out how to walk this line, how can we expect (or program) an AI to do it?

As a way to explore what life would be like with a complete absence of bias or emotional content, modern science fiction writers have created characters that are either strictly logical or without emotion. Those premises have then become plot fodder, allowing the writers to explore the limitations of what it would be like to exist without the human foibles of emotions and feelings.

Also: Microsoft's Bing Chat argues with users, reveals secrets

Unless AI programmers try to simulate emotions, provide weighting for emotional content, or allow for some level of bias based on what's discoverable online, chatbots like ChatGPT will always be limited in their answers. But if they do attempt to simulate emotions or allow for that kind of bias, chatbots like ChatGPT will devolve into the same craziness that humans do.

So what do we want? Limited answers to some questions, or all answers that feel like they came from a discussion with bonkers Uncle Bob over the Thanksgiving table? Go ahead. Give that some thought and discuss in the comments below, hopefully without devolving into Uncle Bob-like bonkers behavior.


You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

See also

  • How to use ChatGPT to write Excel formulas
  • How to use ChatGPT to write code
  • ChatGPT vs. Bing Chat: Which AI chatbot should you use?
  • How to use ChatGPT to build your resume
  • How does ChatGPT work?
  • How to get started using ChatGPT
