Tag: AI

  • Are You Facing Issues With Creating AI Images From Gemini? Google Explains The Reason

    New Delhi: Google has officially recognized issues with its Gemini model’s AI image generation, particularly concerning specific prompts. The tech giant stated that users requesting various images related to a particular culture or historical period should receive accurate responses. However, this hasn’t been the case, with Google attributing the problems to its “tuning” measures.

    Explanation Of Shortcomings

    In a recent blog post, Google delved into the factors contributing to problems with its Gemini model’s AI image generation. The company highlighted two main factors.

    Firstly, their tuning process, aimed at ensuring Gemini could display a diverse range of people, overlooked scenarios where a varied representation wasn’t appropriate. Secondly, over time, the model became overly cautious and started declining certain prompts altogether.

    Temporary Pause On Image Generation

    Google admitted that their recently launched new image generation feature for the Gemini conversational app, which included creating images of people, missed the mark.

    Some generated images were inaccurate or even offensive. In response, Google temporarily paused the image generation of people in Gemini while they worked on an improved version.

    The company emphasized that this outcome wasn’t their intention and reiterated their stance against deliberately creating inaccuracies, especially with historical content.

    Actions To Address Issues

    To resolve the problems, Google plans to subject Gemini’s AI image generation to more testing. However, the company mentioned that they can’t guarantee Gemini won’t make mistakes or produce embarrassing, wrong, or offensive results even after fixing the issues. Nonetheless, they promised to take action whenever problems arise.

    Recommendation To Users

    While the Gemini AI model undergoes improvements, Google suggests that users rely on Google Search, which surfaces “fresh, high-quality information” from the web.

    Temporary Halt On Generating Images Of People

    Following a backlash over inaccurate results, Google temporarily suspended its Gemini AI model from generating images of people.

    This decision came after users shared images created by the model that depicted people of color in historical scenes that involved only white people.

  • Artificial Intelligence To Save More Time For Chartered Accountants, Says ICAI Prez

    New Delhi: Artificial intelligence will provide a helping hand and free up a lot of time for chartered accountants to focus on analytical work, ICAI President Ranjeet Kumar Agarwal said on Wednesday and highlighted that there is a huge demand for chartered accountants.

    The Institute of Chartered Accountants of India (ICAI) expects there will be a need for around 30 lakh chartered accountants in the next 20 to 25 years. Last year, around 22,000 students cleared the chartered accountancy examination.

    At a briefing in the national capital, Agarwal, who took charge as the president on February 12, said a committee will be coming out with a roadmap in the next two months on the use of Artificial Intelligence (AI) at the Institute of Chartered Accountants of India (ICAI).

    He emphasized that Artificial Intelligence (AI) is a tool and is saving a lot of time. “This will give you (chartered accountants) more time for application of mind, analytics… I believe AI is going to be a helping hand for the profession of chartered accountants so that they can focus more on other analytical areas”.

    Adding further, the ICAI president said, “AI is taking away the compliance part from the chartered accountants and giving them more scope to work on bigger areas… AI cannot overtake human intelligence.”

    In the audit profession, for instance, to check a 700-page annual report of a listed company, one can make a PDF and put it in ChatGPT. As an auditor, one no longer has to read all 700 pages. “You have to ask the questions like what is the profitability, what is the adverse comment… Whatever you ask, ChatGPT will answer,” Agarwal said.

    Amid instances of chartered accountants coming under the regulatory scanner, Agarwal said the institute has self-developed “so many checks and balances”. The target is to have “lesser anomalies”, he said, adding that efforts are made to continuously enhance the skills of the chartered accountants. For non-compliances, ICAI is also taking action against members, he added.

    The institute will also provide suggestions to the government on increasing the tax-to-GDP ratio as well as on green finance. The tax-to-GDP ratio, which is less than 3 per cent now, should improve for the country to become a developed economy by 2047. In the developed countries, the ratio is around 22 per cent, he noted.

    Besides, a committee will identify irrelevant laws and make suggestions in this regard to the government.

  • OpenAI, Meta And Other Tech Giants Sign Effort To Fight AI Election Interference

    New Delhi: A group of 20 tech companies announced on Friday they have agreed to work together to prevent deceptive artificial-intelligence content from interfering with elections across the globe this year.

    The rapid growth of generative artificial intelligence (AI), which can create text, images and video in seconds in response to prompts, has heightened fears that the new technology could be used to sway major elections this year, as more than half of the world’s population is set to head to the polls.

    Signatories of the tech accord, which was announced at the Munich Security Conference, include companies that are building generative AI models used to create content, including OpenAI, Microsoft and Adobe. Other signatories include social media platforms that will face the challenge of keeping harmful content off their sites, such as Meta Platforms, TikTok and X, formerly known as Twitter.

    The agreement includes commitments to collaborate on developing tools for detecting misleading AI-generated images, video and audio, creating public awareness campaigns to educate voters on deceptive content and taking action on such content on their services.

    Technology to identify AI-generated content or certify its origin could include watermarking or embedding metadata, the companies said. The accord did not specify a timeline for meeting the commitments or how each company would implement them.
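    The accord leaves the mechanisms unspecified. As a rough illustration of the metadata approach only, the sketch below builds a hypothetical provenance record pairing a content hash with origin labels, so a verifier can later confirm the bytes are unchanged; real schemes such as C2PA are far richer and cryptographically signed, and the function and field names here are invented for this example.

    ```python
    import hashlib

    def make_provenance_record(content: bytes, generator: str) -> dict:
        """Build a minimal provenance record for a piece of generated media.

        Pairs a SHA-256 digest of the content with metadata about its origin.
        """
        return {
            "sha256": hashlib.sha256(content).hexdigest(),
            "generator": generator,
            "claim": "ai-generated",
        }

    def verify_provenance(content: bytes, record: dict) -> bool:
        """Check that the content still matches the digest in the record."""
        return hashlib.sha256(content).hexdigest() == record["sha256"]

    image_bytes = b"\x89PNG...example image data..."
    record = make_provenance_record(image_bytes, generator="example-model")
    print(verify_provenance(image_bytes, record))         # True: bytes unchanged
    print(verify_provenance(image_bytes + b"x", record))  # False: content altered
    ```

    A hash-based record like this detects tampering but, unlike a signed credential, does not prove who created the content; that is why the companies also mention watermarking, which embeds the signal in the media itself.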

    “I think the utility of this (accord) is the breadth of the companies signing up to it,” said Nick Clegg, president of global affairs at Meta Platforms. “It’s all good and well if individual platforms develop new policies of detection, provenance, labeling, watermarking and so on, but unless there is a wider commitment to do so in a shared interoperable way, we’re going to be stuck with a hodgepodge of different commitments,” Clegg said.

    Generative AI is already being used to influence politics and even convince people not to vote. In January, a robocall using fake audio of US President Joe Biden circulated to New Hampshire voters, urging them to stay home during the state’s presidential primary election.

    Despite the popularity of text-generation tools like OpenAI’s ChatGPT, the tech companies will focus on preventing harmful effects of AI photos, videos and audio, partly because people tend to have more skepticism with text, said Dana Rao, Adobe’s chief trust officer, in an interview.

    “There’s an emotional connection to audio, video and images,” he said. “Your brain is wired to believe that kind of media.”

  • Google’s 25 Million Euros Investment Aims To Enhance AI Skills For Europeans


  • ChatGPT Users Can Now Bring GPTs Into Any Conversation In OpenAI

    OpenAI is currently only offering the ability to browse, create and use GPTs to its paying customers.

  • ‘AI Girlfriends’ Flood GPT Store Shortly After Launch, OpenAI Rules Breached

    New Delhi: OpenAI’s recently launched GPT store is encountering difficulties with moderation just a few days after its debut. The platform provides personalized editions of ChatGPT, but certain users are developing bots that violate OpenAI’s guidelines.

    These bots, with names such as “Your AI companion, Tsu,” enable users to personalize their virtual romantic companions, violating OpenAI’s restriction on bots explicitly created for nurturing romantic relationships.

    The company is actively working to address this problem. OpenAI revised its usage policies when the store was introduced on January 10, 2024. However, policy violations appearing by the second day highlight the challenges associated with moderation.

    The growing demand for relationship bots adds a layer of complexity to the situation. Reportedly, seven of the 30 most-downloaded AI chatbots in the United States last year were virtual friends or partners, a trend linked to the prevailing loneliness epidemic.

    OpenAI states that it uses automated systems, human review and user reports to assess GPT models, applying warnings or sales bans to those considered harmful. However, the continued presence of girlfriend bots in the store raises doubts about the effectiveness of this enforcement.

    The difficulty in moderation reflects the common challenges experienced by AI developers. OpenAI has faced issues in implementing safety measures for previous models such as GPT-3. With the GPT store available to a wide user audience, the potential for insufficient moderation is a significant concern.

    Other technology companies are also moving swiftly to handle problems with their AI systems, understanding the significance of quick action amid growing competition. Yet these early breaches highlight the moderation challenges that lie ahead.

    Even within the confined environment of a specialized GPT store, moderating narrowly focused bots appears to be a complicated task. As AI progresses, ensuring the safety of such systems is set to become more complex.