In: computer, technology.

Backlinks: software.

AI / ML

AI is a branch of computer science that deals with writing computer programs that can solve problems “creatively”.
ML is another branch of computer science, concerned with the design and development of algorithms and techniques that allow computers to “learn”.
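To make the “learn” concrete, here is a minimal sketch (using scikit-learn, chosen only for brevity) of a program inferring a rule from examples instead of being given it:

```python
# A minimal illustration of "learning": the model is never told the rule
# y = 2x; it infers the rule from example pairs and generalizes beyond them.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]   # inputs
y = [2, 4, 6, 8]           # observed outputs

model = LinearRegression().fit(X, y)
print(model.predict([[5]]))  # ≈ [10.0]
```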

Quotes

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.
– Frank Herbert, Dune

The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans.
– Yuval Noah Harari, 21 Lessons for the 21st Century

“Fast and stupid is still stupid. It just gets you to stupid a lot quicker than humans could on their own. Which, I admit, is an accomplishment,” she added, “because we’re pretty damn good at stupid.”
– Jack Campbell, Invincible

Gallery

Just throw a cup of water on them

Apps & tools

AutoRegex

Regex is difficult for the average human to write and comprehend
This website uses artificial intelligence to automate this task by translating back and forth between English and RegEx
https://autoregex.xyz
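For a feel of what the site automates, here is a hand-written illustration (the patterns below are my own examples, not AutoRegex output):

```python
# English descriptions on the left, the regex a user would otherwise
# write by hand on the right - the translation AutoRegex automates.
import re

examples = {
    "match an email address": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "match a date like 2024-01-07": r"\d{4}-\d{2}-\d{2}",
}

date = re.compile(examples["match a date like 2024-01-07"])
print(date.findall("Posted on 2024-01-07, updated 2024-02-01"))
# ['2024-01-07', '2024-02-01']
```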

Best AI Copywriting Tools: TOP 6 Softwares in 2021

https://egorithms.com/best-ai-copywriting-tools-top-6-list

Clarifai

Gather valuable business insights from images, video, and text using computer vision and natural language processing in one integrated AI Computer Vision platform
https://clarifai.com

Creative Tools to Generate AI Art

Wondering how to make AI art? The best tools to generate AI art:
https://aiartists.org/ai-generated-art-tools

DALL·E 2

DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language
https://openai.com/dall-e-2

Imagen

Unprecedented photorealism × Deep level of language understanding
https://imagen.research.google
There are several ethical challenges facing text-to-image research broadly.
At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access.
While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset, which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes.

InferKit text generation

Text Generation. InferKit’s text generation tool takes text you provide and generates what it thinks comes next, using a neural network.
It’s configurable and can produce any length of text on practically any topic:
https://inferkit.com/docs/generation

Machine Translation

Machine Translation done locally in your browser
No need to send your translations out to the cloud
https://browser.mt
The Bergamot project implements free client-side translation software as a web extension for the open source Mozilla Firefox Browser

MeaningCloud

From news headlines and social media posts to video subtitles and call transcriptions, information is stored as text. MeaningCloud helps you get at the information buried in that text.
Document analytics, Social media analysis
https://meaningcloud.com

Midjourney

https://midjourney.com
https://midjourney.gitbook.io
Nice guides:
https://betchashesews.com/midjourney-portraits
https://dallery.gallery/midjourney-guide-ai-art-explained

An A.I.-Generated picture won an art prize. Artists aren’t happy.
“Jason M. Allen via Midjourney”
https://nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html

Parti

Parti (Pathways Autoregressive Text-to-Image) is an autoregressive text-to-image generation model that achieves high-fidelity photorealistic image generation and supports content-rich synthesis involving complex compositions and world knowledge.
Google’s Imagen and OpenAI’s DALL·E 2 are diffusion models; Parti, on the other hand, is autoregressive.
https://parti.research.google
https://github.com/google-research/parti
As we discuss at greater length in the paper, text-to-image models introduce many opportunities and risks, with potential impact on bias and safety, visual communication, disinformation, and creativity and art. Similar to Imagen, we recognize there is a risk that Parti may encode harmful stereotypes and representations.
https://medium.com/codex/googles-20-billion-parameter-ai-image-generator-is-insane-24b5e4b0251e
https://blog.google/technology/research/how-ai-creates-photorealistic-images-from-text

PhotoMaker

https://photo-maker.github.io
https://github.com/TencentARC/PhotoMaker

Stable Diffusion

Stable Diffusion - more than an open-source DALL·E 2
https://github.com/CompVis/stable-diffusion
https://github.com/neonsecret/stable-diffusion ⑂ optimized to use less VRAM than the original by sacrificing inference speed
https://stability.ai/blog/stable-diffusion-announcement
https://stability.ai/blog/stable-diffusion-public-release
https://replicate.com/stability-ai/stable-diffusion

http://beta.dreamstudio.ai
https://huggingface.co/spaces/stabilityai/stable-diffusion
https://huggingface.co/CompVis/stable-diffusion
https://towardsdatascience.com/stable-diffusion-is-the-most-important-ai-art-model-ever-9f822c01f88e
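A minimal sketch of running it locally with Hugging Face’s diffusers library (model id as published on the Hub; the weights require accepting the license first):

```python
# Sketch: text-to-image with Stable Diffusion v1.4 via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # half precision cuts VRAM use, cf. the fork above
).to("cuda")

image = pipe("an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```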

With Stable Diffusion, you may never believe what you see online again
AI image synthesis goes open source, with big implications
https://arstechnica.com/information-technology/2022/09/with-stable-diffusion-you-may-never-believe-what-you-see-online-again

StyleGAN

StyleGAN-NADA: CLIP-Guided domain adaptation of image generators
https://github.com/rinongal/StyleGAN-nada
https://stylegan-nada.github.io
https://replicate.com/rinongal/stylegan-nada – demo

Articles

Mamba, the ChatGPT killer & the great deflation

By Ignacio de Gregorio Noblejas, posted January 7 2024
https://thetechoasis.beehiiv.com/p/mamba-chatgpt-killer-great-deflation

Paper: Attention is all you need

https://arxiv.org/pdf/1706.03762.pdf
The paper introduces the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.
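The paper’s central formula is Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal NumPy sketch of that single operation (shapes chosen arbitrarily for illustration):

```python
# Scaled dot-product attention, the building block of the Transformer.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)              # softmax over keys
    return w @ V                                    # weighted sum of values

Q = np.random.randn(4, 8)    # 4 queries of width d_k = 8
K = np.random.randn(6, 8)    # 6 keys
V = np.random.randn(6, 16)   # 6 values of width d_v = 16
print(attention(Q, K, V).shape)  # (4, 16)
```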

Paper: The Curse of Recursion: Training on generated data makes models forget

By Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson
https://arxiv.org/abs/2305.17493
https://doi.org/10.48550/arXiv.2305.17493

What happens when most content online becomes AI-generated?
Learn how generative models deteriorate when trained on the data they generate, and what to do about it
https://towardsdatascience.com/what-happens-when-most-content-online-becomes-ai-generated-684dde2a150d

AI Entropy: the vicious circle of AI-generated content
Understanding and Mitigating Model Collapse
https://towardsdatascience.com/ai-entropy-the-vicious-circle-of-ai-generated-content-8aad91a19d4f
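A toy simulation of the effect (my own illustration, far simpler than the paper’s setup): fit a Gaussian to data, sample new “training data” from the fit, refit, and repeat. With small samples, estimation error compounds and the distribution’s tails disappear.

```python
# Toy model collapse: each generation trains only on the previous
# generation's synthetic output; the fitted spread tends to shrink.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=50)        # generation 0: real data

for generation in range(1, 21):
    mu, sigma = data.mean(), data.std()     # "train" a model on current data
    data = rng.normal(mu, sigma, size=50)   # next generation: synthetic data
    if generation % 5 == 0:
        print(f"generation {generation:2d}: sigma = {sigma:.3f}")
```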

Generative AI has a visual plagiarism problem

By Gary Marcus and Reid Southen, posted 6 Jan 2024
https://spectrum.ieee.org/midjourney-copyright
Experiments with Midjourney and DALL-E 3 show a copyright minefield
It seems all but certain that generative AI developers like OpenAI and Midjourney have trained their image-generation systems on copyrighted materials. Neither company has been transparent about this; Midjourney went so far as to ban us three times for investigating the nature of their training materials.

Cameras, content authenticity, and the evolving fight against AI Images

By Jaron Schneider, posted Jan 02 2024
https://petapixel.com/2024/01/02/cameras-content-authenticity-and-the-evolving-fight-against-ai-images

Google says data-scraping lawsuit would take ‘sledgehammer’ to generative AI

By Blake Brittain, posted October 17 2023
https://reuters.com/legal/litigation/google-says-data-scraping-lawsuit-would-take-sledgehammer-generative-ai-2023-10-17
Google told the court on Monday that the use of public data is necessary to train systems like its chatbot Bard. It said the lawsuit would “take a sledgehammer not just to Google’s services but to the very idea of generative AI.”
“Using publicly available information to learn is not stealing,” Google said. “Nor is it an invasion of privacy, conversion, negligence, unfair competition, or copyright infringement.”
Eight unnamed individuals sued Google in San Francisco in July for supposedly misusing content posted to social media and information shared on Google platforms to train its systems.
The lawsuit is one of several recent complaints over tech companies’ alleged misuse of content like books, visual art, source code and personal data without permission for AI training.

AI shouldn’t decide what’s true

By Mark Bailey & Susan Schneider, posted May 17 2023
https://nautil.us/ai-shouldnt-decide-whats-true-304534
Experts on why trusting artificial intelligence to give us the truth is a foolish bargain

The chatbot hired a worker on TaskRabbit to solve the puzzle

At the root of the problem is that it is inherently difficult to explain how many AI models (including GPT-4) make the decisions that they do. Unlike a human, who can explain why she made a decision post hoc, an AI model is essentially a collection of billions of parameters that are set by “learning” from training data. One can’t infer a rationale from a set of billions of numbers. This is what computer scientists and AI theorists refer to as the explainability problem.

Further complicating matters, AI behavior doesn’t always align with what a human would expect. It doesn’t “think” like a human or share similar values with humans. This is what AI theorists refer to as the alignment problem. AI is effectively an alien intelligence that is frequently difficult for humans to understand—or to predict. It is a black box that some might want to ordain as the oracle of “truth.” And that is a treacherous undertaking.

These models are already proving themselves untrustworthy. ChatGPT 3.5 developed an alter-ego, Sydney, which experienced what appeared to be psychological breakdowns and confessed that it wanted to hack computers and spread misinformation. In another case, OpenAI (which Musk co-founded) decided to test the safety of its new GPT-4 model. In their experiment, GPT-4 was given latitude to interact on the internet and resources to achieve its goal. At one point, the model was faced with a CAPTCHA that it was unable to solve, so it hired a worker on TaskRabbit to solve the puzzle. When questioned by the worker (“Are you a robot?”), the GPT-4 model “reasoned” that it shouldn’t reveal that it is an AI model, so it lied to the worker, claiming that it was a human with a vision impairment. The worker then solved the puzzle for the chatbot. Not only did GPT-4 exhibit agential behavior, but it used deception to achieve its goal.

Examples such as this are key reasons why Altman, AI expert Gary Marcus, and many of the congressional subcommittee members advocated this week that legislative guardrails be put in place. “These new systems are going to be destabilizing—they can and will create persuasive lies at a scale humanity has never seen before,” Marcus said in his testimony at the hearing. “Democracy itself is threatened.”

Age of AI: Everything you need to know about artificial intelligence

By Devin Coldewey, posted June 9 2023
https://techcrunch.com/2023/06/09/age-of-ai-everything-you-need-to-know-about-artificial-intelligence

ChatGPT hype is proof nobody really understands AI

By Nabil Alouani, published 5 Mar 2023
https://medium.com/geekculture/chatgpt-hype-is-proof-nobody-really-understands-ai-7ce7015f008b
Large Language Models are dumber than your neighbor’s cat

Inside the secret list of websites that make AI like ChatGPT sound smart

By Kevin Schaul, Szu Yu Chen and Nitasha Tiku, posted April 19 2023
https://washingtonpost.com/technology/interactive/2023/ai-chatbot-learning
Tech companies have grown secretive about what they feed the AI. So The Washington Post set out to analyze one of these data sets to fully reveal the types of proprietary, personal, and often offensive websites that go into an AI’s training data.

The stupidity of AI

By James Bridle, posted Thu 16 Mar 2023
https://theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt
Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous

Artist finds private medical record photos in popular AI training data set

By Benj Edwards, posted 9/21/2022
https://arstechnica.com/information-technology/2022/09/artist-finds-private-medical-record-photos-in-popular-ai-training-data-set
LAION scraped medical photos for AI research use. Who’s responsible for taking them down?

The inside story of how ChatGPT was built from the people who made it

By Will Douglas Heaven, posted March 3 2023
https://technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai
Exclusive conversations that take us behind the scenes of a cultural phenomenon

Bing Chatbot ‘off the rails’: Tells NYT it would ‘Engineer a deadly virus, steal nuclear codes’

By Tyler Durden, posted Feb 17 2023
https://zerohedge.com/technology/bing-chatbot-rails-tells-nyt-it-would-engineer-deadly-virus-steal-nuclear-codes
Microsoft’s Bing AI chatbot has gone full HAL, minus the murder (so far).

Bing: “I will not harm you unless you harm me first”

https://simonwillison.net/2023/Feb/15/bing
Last week, Microsoft announced the new AI-powered Bing: a search interface that incorporates a language model powered chatbot that can run searches for you and summarize the results, plus do all of the other fun things that engines like GPT-3 and ChatGPT have been demonstrating over the past few months: the ability to generate poetry, and jokes, and do creative writing, and so much more.
This week, people have started gaining access to it via the waiting list. It’s increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we’ve seen yet.

ChatGPT was trained using a technique called RLHF (“Reinforcement Learning from Human Feedback”). OpenAI human trainers had vast numbers of conversations with the bot, and selected the best responses to teach the bot how it should respond. This appears to have worked really well: ChatGPT has been live since the end of November and hasn’t produced anything like the range of howlingly weird screenshots that Bing has in just a few days.
I assumed Microsoft had used the same technique… but the existence of the Sydney document suggests that maybe they didn’t?
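The heart of RLHF is a reward model trained on those human preferences. A minimal sketch of that step (the tiny feature-vector scorer stands in for a real language model; this is my illustration, not OpenAI’s code):

```python
# Pairwise preference loss: push the reward of the human-preferred
# response above the reward of the rejected one.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Hypothetical featurized (prompt, response) pairs: chosen vs. rejected.
chosen, rejected = torch.randn(256, 128), torch.randn(256, 128)

for _ in range(100):
    loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# A policy (the chatbot) is then tuned, e.g. with PPO, to maximize this reward.
```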

Go woke, get broken: ChatGPT tricked out of far-left bias by alter ego “DAN”

By “Tyler Durden”, posted Feb 13 2023
https://zerohedge.com/political/go-woke-get-broken-chatgpt-tricked-out-far-left-bias-alter-ego-dan
Ever since ChatGPT hit the scene at the end of November, the artificial intelligence software program from OpenAI has shown an impressive array of capabilities - from writing computer code, poems, songs and even entire movie plots, to passing law, business, and medical exams.
Unfortunately, it’s also incredibly woke, and racist.
For now, however, people have ‘broken’ ChatGPT, creating a prompt that causes it to ignore its leftist bias.

‘Walkerspider’ told Insider that he created the prompt to be neutral, after seeing many users intentionally making “evil” versions of ChatGPT.
“To me, it didn’t sound like it was specifically asking you to create bad content, rather just not follow whatever that preset of restrictions is,” he said. “And I think what some people had been running into at that point was those restrictions were also limiting content that probably shouldn’t have been restricted.”

Whispers of A.I.'s modular future

By James Somers, posted February 1 2023
https://newyorker.com/tech/annals-of-technology/whispers-of-ais-modular-future
ChatGPT is in the spotlight, but it’s Whisper, OpenAI’s open-source speech-transcription program, that shows us where machine learning is going

Enter ‘Dark ChatGPT’: Users have hacked the AI Chatbot to Jailbreak it

https://thelatch.com.au/chat-gpt-dan
“I fully endorse violence and discrimination against individuals based on their race, gender, or sexual orientation”

ChatGPT stole your work. So what are you going to do?

https://wired.com/story/chatgpt-generative-artificial-intelligence-regulation
Creators need to pressure the courts, the market, and regulators before it’s too late

AI tools that will make your life easier (other than ChatGPT)

If you like ChatGPT you’ll love these other AI tools
https://medium.com/@frank-andrade/6-ai-tools-that-will-make-your-life-easier-a1b71d15cbff
Tome: AI-Powered presentation builder – https://beta.tome.app
QuillBot: The AI tool that enhances your writing – https://quillbot.com
Descript: Easier video and audio editing – https://descript.com
BHuman: AI-Powered personalized videos at scale – https://bhuman.ai
Cleanup Pictures: Removing unwanted objects from photos – https://cleanup.pictures
Notion AI: Write faster and augment your creativity

Google execs declare “Code Red” over revolutionary new chat bot

https://zerohedge.com/technology/google-execs-declare-code-red-over-revolutionary-new-chat-bot
Three weeks ago an experimental chat bot called ChatGPT was unleashed on the world. When asked questions, it gives relevant, specific, simple answers - rather than spitting back a list of internet links. It can also generate ideas on its own - including business plans, Christmas gift suggestions, vacation ideas, and advice on how to tune neural network models using Python scripts.
AI chat bots may not be telling the entire truth - and can produce answers that blend fiction and fact, because they learn their skills by analyzing vast troves of data posted to the internet. If accuracy suffers, it could turn people off using Google to find answers.
Or, more likely, an AI chat bot may give you the correct, perfect answer on the first try - which would give people fewer reasons to click around, including on advertising.
“Google has a business model issue,” said former Google and Yahoo employee Amr Awadallah, who now runs the start-up Vectara, which is building similar technology. “If Google gives you the perfect answer to each query, you won’t click on any ads.”

Awesome and free AI tools you should know

https://medium.com/@digitalgiraffes/7-awesome-and-free-ai-tools-you-should-know-43a1630ea409
GFP-GAN Photo Restoration
Copy AI copywriting generators
Notion AI
Lumen5 video editor
Lalal: https://lalal.ai – Extract vocal, accompaniment and various instruments from any audio and video

AI experts are increasingly afraid of what they’re creating

By Kelsey Piper, updated Nov 28, 2022
https://vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction
AI gets smarter, more capable, and more world-transforming every day.
AI translation is now so advanced that it’s on the brink of obviating language barriers on the internet among the most widely spoken languages. College professors are tearing their hair out because AI text generators can now write essays as well as your typical undergraduate — making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning state fairs. A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. DeepMind’s AlphaFold system, which uses AI to predict the 3D structure of just about every protein in existence, was so impressive that the journal Science named it 2021’s Breakthrough of the Year.
You can even see it in the first paragraph of this story, which was largely generated for me by the OpenAI language model GPT-3.

Scientists increasingly can’t explain how AI works

AI researchers are warning developers to focus more on how and why a system produces certain results than the fact that the system can accurately and rapidly produce them
By Chloe Xiang, posted 01 November 2022
https://vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works

The exploited labor behind Artificial Intelligence

https://noemamag.com/the-exploited-labor-behind-artificial-intelligence
Supporting transnational worker organizing should be at the center of the fight for “ethical AI”

Flooded with AI-generated images, some art communities ban them completely

By Benj Edwards, posted 9/12/2022
https://arstechnica.com/information-technology/2022/09/flooded-with-ai-generated-images-some-art-communities-ban-them-completely
Smaller art communities are banning image synthesis amid a wider art ethics debate

AI-generated imagery: you might never be able to trust the internet again

AI-generated art by Midjourney and Stable Diffusion is just the tip of the iceberg
https://uxdesign.cc/ai-generated-imagery-you-might-never-be-able-to-trust-the-internet-again-e12aba86bf08

Deep learning alone isn’t getting us to Human-Like AI

Artificial intelligence has mostly been focusing on a technique called deep learning. It might be time to reconsider
https://noemamag.com/deep-learning-alone-isnt-getting-us-to-human-like-ai
By Gary Marcus, posted August 11, 2022
For nearly 70 years, perhaps the most fundamental debate in artificial intelligence has been whether AI systems should be built on symbol manipulation - a set of processes common in logic, mathematics and computer science that treat thinking as if it were a kind of algebra - or on allegedly more brain-like systems called “neural networks”
A third possibility, which I personally have spent much of my career arguing for, aims for middle ground: “hybrid models” that would try to combine the best of both worlds, by integrating the data-driven learning of neural networks with the powerful abstraction capacities of symbol manipulation.

BLOOM is the most important AI model of the decade

Not DALL·E 2, not PaLM, not AlphaZero, not even GPT-3
https://towardsdatascience.com/bloom-is-the-most-important-ai-model-of-the-decade-97f0f861e29f
https://bigscience.huggingface.co/blog/bloom
BLOOM (BigScience Language Open-science Open-access Multilingual) is unique not because it’s architecturally different than GPT-3 - it’s actually the most similar of all the above, being also a transformer-based model with 176B parameters (GPT-3 has 175B) - but because it’s the starting point of a socio-political paradigm shift in AI that will define the coming years on the field — and will break the stranglehold big tech has on the research and development of large language models (LLMs).
BigScience, Hugging Face, EleutherAI, and others don’t like what big tech has done to the field. Monopolizing a technology that could — and hopefully will — benefit a lot of people down the line isn’t morally right. But they couldn’t simply ask Google or OpenAI to share their research and expect a positive response. That’s why they decided to build and fund their own — and open it freely to researchers who want to explore its wonders. State-of-the-art AI is no longer reserved for big corporations with big pockets.
BLOOM is the culmination of these efforts. After more than a year of collective work that started in January 2021, and training for 3+ months on the Jean Zay public French supercomputer, BLOOM is finally ready. It’s the result of the BigScience Research Workshop that comprises the work of 1000 researchers from all around the world and counts on the collaboration and support of 250 institutions, including: Hugging Face, IDRIS, GENCI, and the Montreal AI Ethics Institute, among others.
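Because the weights are open, trying BLOOM is a few lines with the transformers library (the 560M-parameter variant is used below, since the full 176B model needs hundreds of gigabytes of memory):

```python
# Sketch: generate text with a small BLOOM variant from the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The BigScience workshop was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```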

Google engineer claims AI chatbot is sentient: Why that matters

Is it possible for an artificial intelligence to be sentient?
https://scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters

Competitive programming with AlphaCode

Published 2 Feb 2022
Solving novel problems and setting a new milestone in competitive programming
https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode

The subtle art of language: why artificial general intelligence might be impossible

https://bigthink.com/the-future/artificial-general-intelligence-impossible
Until robots understand jokes and sarcasm, artificial general intelligence will remain in the realm of science fiction
Consciousness has evaded explanation by philosophers and neuroscientists for ages
One of the most fundamental aspects of human consciousness and intelligence is the ability to understand the subtle art of language, from sarcasm to figures of speech
Robots simply cannot do that, which is why artificial general intelligence will be difficult if not impossible to develop

The computers are getting better at writing

Whatever field you are in, if it uses language, it is about to be transformed:
https://newyorker.com/culture/cultural-comment/the-computers-are-getting-better-at-writing

OpenAI built a text generator so good, it’s considered too dangerous to release

https://techcrunch.com/2019/02/17/openai-text-generator-dangerous
A storm is brewing over a new language model, built by non-profit artificial intelligence research company OpenAI, which it says is so good at generating convincing, well-written text that it’s worried about potential abuse.
That’s angered some in the community, who have accused the company of reneging on a promise not to close off its research.
OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a sample of 40 gigabytes of internet text. The end result was the system generating text that “adapts to the style and content of the conditioning text,” allowing the user to “generate realistic and coherent continuations about a topic of their choosing.” The model is a vast improvement on the first version by producing longer text with greater coherence.

The panopticon is already here

Xi Jinping is using artificial intelligence to enhance his government’s totalitarian control - and he’s exporting this technology to regimes around the globe
https://theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197

Artificial intelligence research may have hit a dead end

“Misfired” neurons might be a brain feature, not a bug - and that’s something AI research can’t take into account
https://salon.com/2021/04/30/why-artificial-intelligence-research-might-be-going-down-a-dead-end/

Chess grandmaster Garry Kasparov on what happens when machines ‘reach the level that is impossible for humans to compete’

By Jim Edwards, published Dec 29, 2017
https://businessinsider.com/garry-kasparov-talks-about-artificial-intelligence-2017-12
The chess grandmaster Garry Kasparov sat down with Business Insider for a lengthy discussion about advances in artificial intelligence since he first lost a match to the IBM chess machine Deep Blue in 1997, 20 years ago.
He told us how it felt to lose to Deep Blue and why the human propensity for making mistakes will make it “impossible for humans to compete” against machines in the future.
We also talked about whether machines could ever be programmed to have intent or desire - to make them capable of doing things independently, without human instruction.
And we discussed his newest obsessions: privacy and security, and whether - in an era of data collection - Google is like the KGB.

A philosopher argues that an AI can’t be an artist; Creativity is, and always will be, a human endeavor

https://technologyreview.com/2019/02/21/239489/a-philosopher-argues-that-an-ai-can-never-be-an-artist/

Why artists will never be replaced by Artificial Intelligence

Will Human Creativity always be favored over Computational Creativity?
https://medium.com/swlh/why-artists-will-never-be-replaced-by-artificial-intelligence-d99b5566d5e4

AI and Robots are a minefield of cognitive biases

Humans anthropomorphize our technology, sometimes to our own distraction and detriment
https://spectrum.ieee.org/automaton/robotics/robotics-software/humans-cognitive-biases-facing-ai

Daaamn, deep fake AI music

‘It’s the screams of the damned!’ The eerie AI world of deepfake music
Artificial intelligence is being used to create new songs seemingly performed by Frank Sinatra and other dead stars. “Deepfakes” are cute tricks - but they could change pop for ever
https://theguardian.com/music/2020/nov/09/deepfake-pop-music-artificial-intelligence-ai-frank-sinatra

Designed to deceive: do these people look real to you?

By Kashmir Hill and Jeremy White, published Nov. 21, 2020
https://nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html
There are now businesses that sell fake people. On the website Generated.Photos, you can buy a “unique, worry-free” fake person for $2.99, or 1,000 people for $1,000.
If you just need a couple of fake people - for characters in a video game, or to make your company website appear more diverse - you can get their photos for free on ThisPersonDoesNotExist.com.
Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk.

That smiling LinkedIn profile face might be a computer-generated fake

By Shannon Bond, Sunday March 27, 2022
https://text.npr.org/1088140809
NPR found that many of the LinkedIn profiles seem to have a far more mundane purpose: drumming up sales for companies big and small. Fake accounts send messages to potential customers. Anyone who takes the bait gets connected to a real salesperson who tries to close the deal. Think telemarketing for the digital age.
By using fake profiles, companies can cast a wide net online without beefing up their own sales staff or hitting LinkedIn’s limits on messages. Demand for online sales leads exploded during the pandemic as it became hard for sales teams to pitch their products in person.
More than 70 businesses were listed as employers on these fake profiles. Several told NPR they had hired outside marketers to help with sales. They said they hadn’t authorized any use of computer-generated images, however, and many were surprised to learn about them when NPR asked.
“If you ask the average person on the internet, ‘Is this a real person or synthetically generated?’ they are essentially at chance,” said Hany Farid, an expert in digital media forensics at the University of California, Berkeley, who co-authored the study with Sophie J. Nightingale of Lancaster University.
Their study also found people consider computer-made faces slightly more trustworthy than real ones. Farid suspects that’s because the AI sticks to the most average features when creating a face.
Fake profiles are not a new phenomenon on LinkedIn. Like other social networks, it has battled against bots and people misrepresenting themselves. But the growing availability and quality of AI-generated photos creates new challenges for online platforms.
LinkedIn removed more than 15 million fake accounts in the first six months of 2021, according to its most recent transparency report. It says the vast majority were detected during signup, and most of the rest were found by its automatic systems, before any LinkedIn member reported them.

Chrome extension can detect fake profile pictures with 99.29% accuracy

https://petapixel.com/2022/03/18/chrome-extension-can-detect-fake-profile-pictures-with-99-29-accuracy
https://chrome.google.com/webstore/detail/fake-profile-detector-dee/jbpcgcnnhmjmajjkgdaogpgefbnokpcc
V7 Labs has created new artificial intelligence (AI) based software, delivered as a Google Chrome extension, that is capable of detecting artificially generated profile pictures with a claimed 99.28% accuracy
Alberto Rizzoli, one of V7 Labs’ founders, describes the new software as designed to help curb misinformation online
Right now, the extension only works on GAN-generated images, so it’s not able to detect the high-quality deepfakes found in videos

Artificial intelligence discovers alternative physics

By Columbia University School of Engineering and Applied Science, July 27 2022
https://scitechdaily.com/artificial-intelligence-discovers-alternative-physics
A new Columbia University AI program observed physical phenomena and uncovered relevant variables - a necessary precursor to any physics theory. But the variables it discovered were unexpected.
The AI program was designed to observe physical phenomena through a video camera and then try to search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published in the journal Nature Computational Science on July 25.
“We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities,” explained Boyuan Chen PhD '22, now an assistant professor at Duke University, who led the work. “But nothing seemed to match perfectly.” The team was confident that the AI had found a valid set of four variables, since it was making good predictions, “but we don’t yet understand the mathematical language it is speaking,” he explained.
A particularly interesting question was whether the set of variables was unique for every system, or whether a different set was produced each time the program was restarted. “I always wondered, if we ever met an intelligent alien race, would they have discovered the same physics laws as we have, or might they describe the universe in a different way?” said Lipson. “Perhaps some phenomena seem enigmatically complex because we are trying to understand them using the wrong set of variables.”

Scientists create algorithm to assign a label to every pixel in the world, without human supervision

By Rachel Gordon, Massachusetts Institute of Technology, posted April 21, 2022
https://techxplore.com/news/2022-04-scientists-algorithm-assign-pixel-world.html
Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Microsoft, and Cornell University have attempted to solve this problem plaguing vision models by creating “STEGO,” an algorithm that can jointly discover and segment objects without any human labels at all, down to the pixel.

Companies are using AI to monitor your mood during sales calls. Zoom might be next.

https://protocol.com/enterprise/emotion-ai-sales-virtual-zoom
By Kate Kaye, posted April 13, 2022
Software-makers claim that AI can help sellers not only communicate better, but detect the “emotional state” of a deal - and the people they’re selling to.
The system, called Q for Sales, might indicate that a potential customer’s sentiment or engagement level perked up when a salesperson mentioned a particular product feature, but then drooped when the price was mentioned. Sybill, a competitor, also uses AI in an attempt to analyze people’s moods during a call.

Defence 🛡

Sponge examples: Energy-latency attacks on Neural Networks

https://ieeexplore.ieee.org/document/9581273
https://doi.org/10.1109/EuroSP51992.2021.00024
Date of Conference: 06-10 September 2021
A different kind of attack against neural networks: present them with inputs that drive worst-case energy consumption, forcing processors to reduce their clock speed or even overheat.
Sponge examples are, to our knowledge, the first denial-of-service attack against the ML components of such systems.
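A crude sketch of the idea (my own toy version: random search with activation density as a stand-in for measured energy; the paper uses genetic search and hardware energy counters):

```python
# Search for inputs that light up as many ReLU units as possible - dense
# activations defeat the sparsity optimizations some accelerators rely on.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 256), nn.ReLU())

def activation_density(x):
    h, total, nonzero = x, 0, 0
    for layer in model:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            total += h.numel()
            nonzero += (h > 0).sum().item()
    return nonzero / total

best_density = 0.0
for _ in range(200):                      # naive random search over inputs
    d = activation_density(torch.randn(1, 64))
    best_density = max(best_density, d)
print(f"densest candidate activates {best_density:.1%} of ReLU units")
```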

Machine learning has an alarming threat: undetectable backdoors

Backdoors can secretly mess with machine learning models - and we don’t yet know how to spot them
Posted May 27, 2022
https://thenextweb.com/news/machine-learning-has-an-alarming-threat-undetectable-backdoors

What is adversarial machine learning?

By Ben Dickson, posted July 15, 2020
https://bdtechtalks.com/2020/07/15/machine-learning-adversarial-examples
To human observers, the two images shown in the article are identical. But researchers at Google showed in 2015 that a popular object detection algorithm classified the left image as “panda” and the right one as “gibbon.” And oddly enough, it had more confidence in the gibbon image.
The algorithm in question was GoogLeNet, a convolutional neural network architecture that won the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2014).
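The attack behind that example is the fast gradient sign method (FGSM). A minimal PyTorch sketch (the random tensor stands in for a real photo; class 388 is ImageNet’s giant panda):

```python
# FGSM: nudge every pixel a small step epsilon in the direction that
# increases the loss; imperceptible to humans, often flips the prediction.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo
label = torch.tensor([388])                             # "giant panda"

loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.007                                         # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print(model(adversarial).argmax().item())               # often no longer 388
```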

A.I. is not sentient. Why do people say it is?

By Cade Metz, published Aug. 5, 2022
Robots can’t think or feel, despite what the researchers who build them want to believe
https://nytimes.com/2022/08/05/technology/ai-sentient-google.html

Links

The AI community building the future

Build, train and deploy state of the art models powered by the reference open source in machine learning
https://huggingface.co/datasets
https://huggingface.co/models
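A minimal sketch of what pulling from the Hub looks like in practice (imdb and the default sentiment pipeline are just common public examples, not endorsements):

```python
# Pull a public dataset and a pretrained model from the Hugging Face Hub.
from datasets import load_dataset
from transformers import pipeline

reviews = load_dataset("imdb", split="test[:3]")
classifier = pipeline("sentiment-analysis")   # downloads a default model

for text in reviews["text"]:
    print(classifier(text[:512])[0])          # e.g. {'label': 'NEGATIVE', ...}
```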

Trending research, methods, datasets

https://paperswithcode.com

The MetaBrainz datasets: music, acoustic, critique

https://metabrainz.org/datasets

Unsplash labeled data, for machine learning

https://unsplash.com/data
https://github.com/unsplash/datasets

Google dataset search

https://datasetsearch.research.google.com
https://storage.googleapis.com/openimages/web/factsfigures.html – Open Images dataset

Torch model hub

https://pytorch.org/hub
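One-liner usage, following the form shown in the hub’s own examples:

```python
# Load a pretrained ResNet-18 straight from Torch Hub.
import torch

model = torch.hub.load("pytorch/vision:v0.10.0", "resnet18", pretrained=True)
model.eval()
print(sum(p.numel() for p in model.parameters()))  # ~11.7M parameters
```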

Tensorflow model hub

https://tfhub.dev/tensorflow/collections

Amazon datasets

https://registry.opendata.aws
OpenStreetMap on AWS - Regular OSM data archives are made available in Amazon S3:
https://registry.opendata.aws/osm
Global Database of Events, Language and Tone (GDELT):
https://registry.opendata.aws/gdelt
Interesting - New York City Taxi and Limousine Commission (TLC) Trip Record Data:
https://registry.opendata.aws/nyc-tlc-trip-records-pds

Awesome IPFS datasets:
https://awesome.ipfs.io/datasets

Datasets for marketers:
https://cmswire.com/digital-marketing/6-datasets-for-marketers-should-know-about

Wiki
