January 17, 2024

ChatGPT vs. Google Bard: Which Tool Is Best for Text Summarization?

According to Ipsos Global Advisor’s global survey (July 2023), 54% of respondents consider AI exciting. This word perfectly reflects the public’s complex attitude towards the technology: generally positive, inquisitive, yet slightly guarded. A strategic evaluation of AI’s role in society still lies ahead. (If AI doesn’t take over the world first, of course. [smile])

As a team that creates AI-powered products for widespread business use, specifically for customer service teams and data-driven decision-making, we’re interested in the operational value of AI-based tools. We’re not alone in this aspiration, of course. Plenty of experiments, trials, and app comparisons have been described in detail in publications. However, the level of curiosity is so high, and the range of the tools’ applications is so broad, that there is still room to contribute to the topic.

Hence, we’ve decided to conduct a series of trials, or “battles”, for the most popular AI-based tools. Far from attempting to draw ultimate conclusions, we’re eager to test some tools on some tasks - endeavors that we all face in everyday life.

Our goal is to get our hands dirty and share our experience in a measurable form.

In a minute, we’re starting our first experiment: “ChatGPT vs. Bard”!

Introduction

Let’s take a quick glimpse at the participants, the task, and our approach to comparison.

Contenders

In today’s series, our contestants are Bard and GPT-4. Let us introduce them first:

  • Bard is Google's conversational AI service, designed to synthesize complex information, including AI-generated content, and provide intuitive responses. It utilizes data from the web to present fresh, high-quality content. Bard aims to assist users in creative and analytical tasks, enhancing user knowledge and decision-making processes.
  • GPT-4, developed by OpenAI, is a multimodal AI that understands text and images, delivering human-like text responses due to Natural Language Processing (NLP) technology. It's capable of complex understanding and generation of content, providing detailed explanations, and facilitating creative tasks. GPT serves a wide array of applications, from conversation to content creation.

Both systems can be used effectively to deal with today’s task.

The Task

For the competition, we chose a task that everyone faces in daily life. Summarization is the operation we all rely on to get insights faster and with less effort. Here are just a few examples of how you can benefit from using Large Language Models (LLMs) to enhance your productivity and remove the monotony from your workflow:

  • quickly grasp the key points of news articles, reports, or other text materials;
  • identify the most important information in a set of research findings and extract the key takeaways;
  • create short and engaging summaries of your blog posts, articles, or other written content for social media or email marketing;
  • craft concise pitch decks or executive summaries to present your ideas to investors or clients;
  • get a quick overview of long emails or messages without having to read everything word-for-word.

As a basis for our assessment, we took the report “Customer Service Excellence 2023 (International Edition)” by Deloitte. The report offers an analysis of customer service, presenting extensive research, trends, and the influence of AI in Europe.

“Methodology”

We put “methodology” in quotes because our “ChatGPT vs. Google Bard” survey isn’t scientific, of course; it’s rather hands-on and practice-oriented. Yet, we adhered to the primary principle of comparison: we gave both tools the same task to make the results measurable and comparable.

Round 1. Uploading the Document

Let’s start with an operation that is simple yet vital for user experience - feeding the document to an assistant. Working with plain text is too simple for experimenting, isn’t it? Hence, we decided to use PDF and JPG formats to test the AI-powered tools’ functionality. After all, we want to test not only the models’ ability to “think”, but also to recognize text in “non-native” formats.

Bard

Don’t try to upload the file via a link; it doesn’t work. Suppose you ask the system to assist you with uploading the document. In that case, it’ll add confusion by stating that you can share a document previously uploaded to Google Drive or another file-exchange website. Yet, sharing the link still results in an error.

The right sequence of actions is as follows:

  • upload the document (Word, JPG, or PDF) to Google Drive;
  • activate the Google Workspace extension in Bard.

Now, the magic happens.

Ask the AI assistant to summarize the text. You have to mention that the document is stored on Google Drive. You also have to give Bard keywords so that the tool can distinguish the document from others. A hint:

  • first, ask the system to find the document using keywords in a prompt;
  • after the tool gives you the list of documents that meet the criteria, ask Bard to sum up the right one.

The screenshot of the prompt asking Bard to search for a document on Google Drive

As soon as Bard finds the document, it will share its crux with you. In the same way, you can ask Bard to summarize email content. Note that this option is available only for personal accounts, not business ones.

You can also use Bard’s interface to upload JPG files with text and analyze them.
There is one meaningful detail about Bard when it comes to text analysis: if you try to upload a file containing faces, the system won’t read it.

Here is the uploaded picture:

A page from the report “Customer Service Excellence 2023 (International Edition)” by Deloitte to check Bard’s ability to read texts with faces

Here is Bard’s response:

The screenshot of Bard’s response to a prompt with a picture containing faces

Maybe we will be more successful with this task once Bard works on the Gemini Advanced model. We’ll see.

GPT-4

With GPT, everything is clear.

You can upload either the whole report as a PDF or separate pages as JPG/PDF with one click. As soon as a document is uploaded, you’ll get your text distillation.
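For readers who would rather script this step than use the chat interface, one practical consideration is that a long report can exceed a model’s input limit, so its text is usually split into chunks before each chunk is summarized. Here is a minimal, hypothetical sketch of such a chunker; the `max_chars` and `overlap` values are illustrative assumptions, not limits prescribed by either tool:

```python
def chunk_text(text: str, max_chars: int = 8000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks so each fits a model's input budget.

    max_chars and overlap are illustrative values; tune them for the model
    you actually use. A small overlap preserves context across boundaries.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back a little to keep shared context
    return chunks
```

Each chunk would then be sent to the model with a “summarize the following excerpt” prompt, and the partial summaries summarized once more into a final digest.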

If you need to summarize emails, you have to copy the messages and organize them in a format that is accessible to GPT.
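That organizing step can also be automated. Below is a small sketch of one way to merge several messages into a single prompt; the `sender`, `subject`, and `body` field names are our own assumptions for illustration and would need adapting to whatever your mail export actually provides:

```python
def format_emails_for_summary(emails: list[dict]) -> str:
    """Combine emails into one plain-text block suitable for a chat model.

    Each email dict is assumed (hypothetically) to carry 'sender',
    'subject', and 'body' keys, e.g. taken from a mailbox export.
    """
    parts = ["Summarize the key points of the following emails:\n"]
    for i, mail in enumerate(emails, start=1):
        parts.append(
            f"--- Email {i} ---\n"
            f"From: {mail['sender']}\n"
            f"Subject: {mail['subject']}\n"
            f"{mail['body']}\n"
        )
    return "\n".join(parts)
```

The resulting string can then be pasted into the chat window, or passed to the model programmatically.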

The table comparing ChatGPT’s and Bard’s functionality for uploading documents to summarize them

Round 2. Text Summarization

Our essential task is to find out which of the tools is more powerful at text summarization: ChatGPT or Google Bard. First, we’d like to experiment a bit to find out how the assistants deal with different kinds of text. Then, we’ll provide the report summary made by both AI models.

Bard

For the test, we chose a page from the report with a chart to make the task a little more complicated than plain text interpretation. Here is the page from the report:

A page from the report “Customer Service Excellence 2023 (International Edition)” by Deloitte to check Bard’s ability to read texts with charts

It contains a description of the survey’s methodology, the definitions of the basic terms, and a picture explaining how to analyze charts with the survey’s results.

We got a completely adequate description of the picture:

The screenshot of Bard’s summarization of the report page with a chart

Still, the diagram wasn’t mentioned at all.

OK, what about summarizing a simple infographic like this one?

A page from the report “Customer Service Excellence 2023 (International Edition)” by Deloitte to check Bard’s ability to read texts with infographics

Bard succeeded. At first, it created a summary of almost the same length as the original page; however, after being asked to shorten the text, Bard provided a brief conclusion:

The screenshot of Bard’s summarization of the report page with infographics

And here is one more subtask. Can Bard answer specific questions, provided that the report page contains the needed information? Let’s check!

Here is the report page:

A page from the report “Customer Service Excellence 2023 (International Edition)” by Deloitte to check Bard’s ability to answer specific questions after analyzing the text

We asked a question about the page’s content and judged Bard’s answer to be correct:

The screenshot of Bard’s answer to the question about the report page’s content

And now, the final task of making the whole report summary:

The screenshot illustrating Bard’s ability to summarize the whole report

GPT-4

We used the same tasks and the same report pages to assess GPT-4 and draw a conclusion on the “ChatGPT vs. Bard” topic.

The model succeeded completely with the first subtask. Here is ChatGPT’s response:

The screenshot of ChatGPT’s summarization of the report page with a chart

The page content is reflected correctly. The page description is comprehensive since the chart is mentioned, and its purpose is explained clearly.

What about summarizing the small infographic?

The screenshot of ChatGPT’s answer to the prompt asking to summarize the report page with infographics

Done!

It seems to us that GPT-4’s answer is more informative, but our estimation is based on a “yes/no” criterion rather than a quality assessment; that’s why the task is counted as accomplished without any additional scores.

The last subtask remains: asking GPT-4 a question about the page’s content. Here is the result:

The screenshot illustrating ChatGPT’s ability to answer specific questions after analyzing the text

Done.

Finally, GPT’s description of the whole report:

The screenshot illustrating ChatGPT’s ability to summarize the whole report

To make sure that GPT-4 made a thorough analysis of the document, we asked it to add more details and extend the conclusion. The model predictably succeeded and provided a more in-depth description of the paper.

Final Score

Let’s sum up.

Today, GPT-4 is the winner!

Conclusion

It’s not by accident that we emphasized GPT-4 won today. Tomorrow, we could compare the two tools’ capabilities for writing articles, and, chances are, Bard would take the lead.

What’s more, even this victory is conditional. GPT seems more versatile with its straightforward interface. In addition, it looks better at text recognition. (However, you must have a ChatGPT Plus subscription to enjoy the tool’s slight edge.) On the other hand, Bard can analyze emails and Google Drive documents in place, while GPT can’t.

It’s easy to assume that a particular person’s choice of tool for text summarization will depend on the situation. It’s easier to upload a report or book in PDF through GPT’s interface, while for summing up documents on Google Drive or emails, it’s wiser to turn to Bard.

We think that in minor experiments like the ones we’ve conducted, the goal isn’t to answer the question, “Which tool is better?” We consider our experiment (which is part of a series) a discovery that fosters learning more about the tools’ capabilities and finding the most appropriate ways to use them for particular tasks.

What piece of advice could we give end users who’d like to use AI tools effectively? For individual everyday tasks, you can take on the role of an explorer to pave your own way to task automation and enhanced productivity with the assistance of AI tools.

Things are more complex in the case of implementing AI tools in business. Here, you cannot rely on “blind” experiments. In this matter, the support and expertise of specialized machine learning professionals are needed to adapt AI technologies and tools to your distinctive needs and goals.

Whether you need to enhance your customer support or speed up the analytical processes in your company, the CoSupport AI team is ready to provide you with tech consulting services and build custom AI-powered solutions for your business.

Want to learn how you can keep up with the latest trends and innovations in AI for knowledge-base management?
