Benton writes, “How big is BloombergGPT? Well, the company says it was trained on a corpus of more than 700 billion tokens (or word fragments). For context, GPT-3, released in 2020, was trained on about 500 billion. (OpenAI has declined to reveal any equivalent number for GPT-4, the successor released last month, citing ‘the competitive landscape.’)
“What’s in all that training data? Of the 700 billion-plus tokens, 363 billion are taken from Bloomberg’s own financial data, the sort of information that powers its terminals — ‘the largest domain-specific dataset yet’ constructed, it says. Another 345 billion tokens come from ‘general purpose datasets’ obtained from elsewhere.
“The company-specific data, named FinPile, consists of ‘a range of English financial documents including news, filings, press releases, web-scraped financial documents, and social media drawn from the Bloomberg archives.’ So if you’ve read a Bloomberg Businessweek story in the past few years, it’s in there. So are SEC filings, Bloomberg TV transcripts, Fed data, and ‘other data relevant to the financial markets.’”
Read more here.