
Artificial Intelligence and Health


Your Health and Artificial Intelligence.

How safe would you feel if you were diagnosed and treated by artificial intelligence, by a machine, so to speak?

First let us take a look at the definition of artificial intelligence and what it is all about.



Artificial intelligence (AI) is the ability of a computer program or a machine to think and learn. It is also a field of study that tries to make computers “smart”. Such systems can work on their own without being explicitly programmed with step-by-step commands. John McCarthy coined the name “artificial intelligence” in 1955.

In general use, the term “artificial intelligence” means a machine that mimics human cognition: “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

There’s no doubt that technology has changed the way health care happens—everywhere.

“Even in rural Uganda, a patient can confirm her provider’s authenticity with a text message, and a community health worker can use Google to learn symptoms and treatments,” says Wayan Vota, IntraHealth’s director of digital health.

Now artificial intelligence (AI) and machine learning are changing the way we manage our health in and outside the clinic. We’re starting to see more instances where health workers and researchers can use AI to diagnose eye disease, depression, Alzheimer’s disease, and more.

And then there’s the DIY health tech. Want someone to talk to? There’s a chatbot therapist app for that. Want to have AI on your computer analyze your keystrokes and predict whether you’re developing a neurodegenerative disorder? You can sign up for that here. Want to track and record your every move to stay fit? Keep reading. All these new tools and applications are changing the way we take care of ourselves. They’re also generating scads of health data, which present their own challenges.

What is Artificial Intelligence in Healthcare?

Machine learning has the potential to provide data-driven clinical decision support (CDS) to physicians and hospital staff – paving the way for increased revenue potential. Machine learning, a subset of AI designed to identify patterns, uses algorithms and data to give automated insights to healthcare providers.
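As a rough illustration of the pattern-finding involved (at vastly smaller scale than real CDS tools, and with invented numbers rather than real clinical data), a program can “learn” a decision rule from labeled historical records instead of being told the rule up front:

```python
# Toy illustration, not a real clinical model: "learning" a decision
# threshold from historical, labeled patient readings, in the spirit of
# the pattern-finding that machine-learning CDS tools perform at scale.
# All numbers below are invented for the example.

def learn_threshold(readings, labels):
    """Pick the cutoff on a single measurement that misclassifies
    the fewest historical cases."""
    best_cut, best_errors = None, len(readings) + 1
    for cut in sorted(set(readings)):
        errors = sum(
            (r >= cut) != bool(flag)   # predicted high-risk vs. actual flag
            for r, flag in zip(readings, labels)
        )
        if errors < best_errors:
            best_cut, best_errors = cut, errors
    return best_cut

# Invented HbA1c-style readings with a 1 = "needed intervention" flag.
history = [(5.1, 0), (5.6, 0), (6.0, 0), (6.7, 1), (7.2, 1), (8.0, 1)]
cut = learn_threshold([r for r, _ in history], [f for _, f in history])
print(cut)         # the learned cutoff: 6.7 for this data
print(7.5 >= cut)  # a new reading flagged as high-risk: True
```

Real systems fit far richer models over thousands of variables, but the principle is the same: the rule comes from the data, not from a programmer.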

Examples of AI in Healthcare and Medicine

AI can improve healthcare by fostering preventative medicine and new drug discovery. Two examples of how AI is impacting healthcare include:

IBM Watson’s ability to pinpoint treatments for cancer patients, and Google Cloud’s Healthcare app that makes it easier for health organizations to collect, store, and access data.

Business Insider Intelligence reported that researchers at the University of North Carolina Lineberger Comprehensive Cancer Center used IBM Watson’s Genomic product to identify specific treatments for over 1,000 patients. The product performed big data analysis to determine treatment options for people with tumors who were showing genetic abnormalities.

Comparatively, Google’s Cloud Healthcare application programming interface (API) includes CDS offerings and other AI solutions that help doctors make more informed clinical decisions regarding patients. AI used in Google Cloud takes data from users’ electronic health records through machine learning, creating insights for healthcare providers to make better clinical decisions.

Google worked with the University of California, Stanford University, and the University of Chicago to develop an AI system that predicts the outcomes of hospital visits. The system aims to prevent readmissions and shorten the amount of time patients are kept in hospitals.

Benefits, Problems, Risks & Ethics of AI in Healthcare

Integrating AI into the healthcare ecosystem allows for a multitude of benefits, including automating tasks and analyzing big patient data sets to deliver better healthcare faster, and at a lower cost. According to Business Insider Intelligence, 30% of healthcare costs are associated with administrative tasks. AI can automate some of these tasks, like pre-authorizing insurance, following up on unpaid bills, and maintaining records, to ease the workload of healthcare professionals and ultimately save money.

AI has the ability to analyze big data sets – pulling together patient insights and leading to predictive analysis. Quickly obtaining patient insights helps the healthcare ecosystem discover key areas of patient care that require improvement. Wearable healthcare technology also uses AI to better serve patients. Devices such as Fitbits and smartwatches run AI-driven software that can analyze data to alert users and their healthcare professionals to potential health issues and risks. Being able to assess one’s own health through technology eases the workload of professionals and prevents unnecessary hospital visits or readmissions.

Fitbit devices use AI to analyze data and alert users and healthcare professionals to potential health risks.

As with all things AI, these healthcare technology advancements are based on data humans provide, meaning there is a risk of data sets containing unconscious bias. Previous experience has shown that coder bias and bias in machine learning can affect AI findings. Especially in the sensitive healthcare market, it will be critical to establish new ethics rules to address – and prevent – bias around AI.

Future of Artificial Intelligence in Healthcare

The use of AI in the healthcare market is growing due to the continued demand for wearable technology, digital medicine, and the industry’s overall transformation into the modern, digital age. Hospitals and healthcare professionals are seeing the benefits of using AI in technology and storing patients’ data on private clouds, like the Google Cloud Platform. AI allows doctors and patients to more easily access health records and assess patients’ health data recorded over time via AI-infused technology.


Health tech companies, startups, and healthcare professionals are discovering new ways to incorporate AI into the healthcare market, and the speed at which we improve the healthcare system through AI will only continue to accelerate as the industry dives deeper into digital health. Artificial intelligence in health care carries huge potential, according to experts in computer science and medicine, but it also raises serious questions around bias, accountability and security.

“I think we’re just seeing the tip of the iceberg right now,” said Yoshua Bengio, a computer scientist and professor at the University of Montreal, who was recently awarded the Turing Award, often called the “Nobel Prize” of computing. Bengio is one of the pioneers of deep learning, an advanced form of AI, which he believes will advance health care. In deep learning, a computer is fed data, which it uses to make assumptions and learn as it goes — much like our brain does.

Scientists are already using AI to develop medical devices. At the University of Alberta, researchers are testing an experimental bionic arm that can “learn” and anticipate the movements of an amputee. Last year, the U.S. Food and Drug Administration (FDA) approved a tool that can look at your retina and automatically detect signs of diabetic blindness.

Emergency Room Waiting Times

At Humber River Hospital in northwest Toronto, AI is speeding up perhaps the most frustrating part of a patient’s experience: the emergency room. In the hospital’s control center, powerful computers are now accurately predicting how many patients will arrive in the emergency department — two days in advance.

The software processes real-time data from all over the hospital — admissions, wait times, transfers and discharges — and analyzes it, going back over a year’s worth of information. From that, it can find patterns and pinpoint bottlenecks in the system. “If you add up all those tiny delays — how long it takes to see your doctor, how long you’re waiting for your bed to be cleaned, how long you’re waiting to get up to your room — if you measure all of those things and can shorten each one of them, you can start saving a lot of money,” said Dr. Michael Gardam, chief of staff at Humber River Hospital.

According to Gardam, it’s working: patients are now moving through the system faster, allowing the hospital to see an average of 29 more patients a day.
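Humber River’s actual system is proprietary, but the core idea, forecasting future demand from historical patterns, can be sketched in a few lines. The dates and counts here are invented; a real system would also model seasonality, holidays, and trends:

```python
# Sketch of the idea behind ER arrival forecasting: predict arrivals two
# days out by averaging historical counts for the same weekday.
# The history below is made up for illustration.
from datetime import date

# (date, number of ER arrivals): invented history
history = [
    (date(2019, 6, 3), 210), (date(2019, 6, 4), 195),
    (date(2019, 6, 10), 220), (date(2019, 6, 11), 205),
    (date(2019, 6, 17), 215), (date(2019, 6, 18), 200),
]

def forecast(history, target_day):
    """Mean arrivals on past days sharing target_day's weekday."""
    same_weekday = [n for d, n in history if d.weekday() == target_day.weekday()]
    return sum(same_weekday) / len(same_weekday)

two_days_out = date(2019, 6, 24)        # a Monday, two days ahead of "today"
print(forecast(history, two_days_out))  # mean of the three past Mondays: 215.0
```

With a forecast in hand, staffing and bed-cleaning schedules can be set before the surge arrives, which is where the time savings Gardam describes come from.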

Risks With AI and Health

For machines to learn, they need vast amounts of information. Since that initial data comes from humans, some of that information can be tainted by personal bias, especially if the algorithm isn’t fed a diverse data set.

“In dermatology, you take a look at a number of different photographs or slides of moles. If you happen to be pale-skinned, some of the machine learning associated with that imagery is great. If you’re darker-skinned, it’s not,” said Dr. Jennifer Gibson, a bioethicist at the University of Toronto. She’s not against the integration of AI in health care, but warns that anything involving big data, profit-driven companies and health care should be heavily regulated.

“In our hunger for more data, in order to power these tools, we may be introducing a form of surveillance within our society — which is not really the intended goal, but might happen accidentally,” Gibson said.

Gardam doesn’t share those concerns; he believes humans — not machines — will remain in control. “It’ll still be a long time before we fully accept information coming from a computer system, telling us what the diagnosis is,” he said. “Humans are still going to be reviewing it until we’re very comfortable we’re not missing something.”

Some governments aren’t waiting for that to happen. In the U.S., the FDA recently announced that it is developing a framework for regulating self-learning AI products used in medicine. In a statement to CBC News, Health Canada said it is also engaging with national, international, industry, academic and government stakeholders “to discuss the challenges and opportunities in regulating current and emerging AI technologies in health care.”

What the 21st Century Is Bringing Us in Health Care and AI

In the 21st Century, the age of big data and artificial intelligence (AI), each healthcare organization has built its own data infrastructure to support its own needs, typically involving on-premises computing and storage. Data is balkanized along organizational boundaries, severely constraining the ability to provide services to patients across a care continuum within one organization or across organizations.

This situation evolved as individual organizations had to buy and maintain the costly hardware and software required for healthcare, and has been reinforced by vendor lock-in, most notably in electronic medical records (EMRs). With increasing cost pressure and policy imperatives to manage patients across and between care episodes, the need to aggregate data across and between departments within a healthcare organization and across disparate organizations has become apparent, not only to realize the promise of AI but also to improve the efficiency of existing data-intensive tasks such as population-level segmentation and patient safety monitoring.

The rapid explosion in AI has introduced the possibility of using aggregated healthcare data to produce powerful models that can automate diagnosis and also enable a more personalized approach to medicine by tailoring treatments and targeting resources with maximum effectiveness in a timely and dynamic manner.

However, “the inconvenient truth” is that at present the algorithms that feature prominently in research literature are in fact not, for the most part, executable at the front lines of clinical practice. This is for two reasons: first, these AI innovations by themselves do not re-engineer the incentives that support existing ways of working.

A complex web of ingrained political and economic factors and the proximal influence of medical practice norms and commercial interests determine the way healthcare is delivered. Simply adding AI applications to a fragmented system will not create sustainable change. Second, most healthcare organizations lack the data infrastructure required to collect the data needed to optimally train algorithms to (a) “fit” the local population and/or the local practice patterns, a requirement prior to deployment that is rarely highlighted by current AI publications, and (b) interrogate them for bias to guarantee that the algorithms perform consistently across patient cohorts, especially those who may not have been adequately represented in the training cohort.

For example, an algorithm trained on mostly Caucasian patients is not expected to have the same accuracy when applied to minorities. In addition, such rigorous evaluation and re-calibration must continue after implementation to track and capture those patient demographics and practice patterns which inevitably change over time.
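The per-cohort check described above is simple to state in code: a model can look accurate overall while failing the group that was rare in training, which only breaking the metric down by cohort will reveal. The cohort labels and results below are invented for illustration:

```python
# Sketch of interrogating a model for cohort bias: compute accuracy per
# patient cohort, not just overall. Data is invented for illustration.
from collections import defaultdict

# (cohort, prediction, actual) for a hypothetical validation set
results = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),  # well-represented
    ("B", 1, 0), ("B", 0, 1),                            # under-represented
]

def accuracy_by_cohort(results):
    hits, totals = defaultdict(int), defaultdict(int)
    for cohort, pred, actual in results:
        totals[cohort] += 1
        hits[cohort] += (pred == actual)
    return {c: hits[c] / totals[c] for c in totals}

print(accuracy_by_cohort(results))  # {'A': 1.0, 'B': 0.0}
```

Here the overall accuracy is 4/6, which looks tolerable, yet every prediction for cohort B is wrong. That is exactly the failure mode an algorithm trained mostly on one population can hide.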

Some of these issues can be addressed through external validation, the importance of which is not unique to AI, and it is timely that existing standards for prediction model reporting are being updated specifically to incorporate standards applicable to this end. In the United States, there are islands of aggregated healthcare data in the ICU, and in the Veterans Administration. These aggregated data sets have predictably catalyzed an acceleration in AI development; but without broader development of data infrastructure outside these islands it will not be possible to generalize these innovations.

Artificial Intelligence and Health Care. YouTube Video

The Google Cloud, Health and AI

Elsewhere in the economy, the development of cloud computing, secure high-performance general use data infrastructure and services available via the Internet (the “cloud”), has been a significant enabler for large and small technology companies alike, providing significantly lower fixed costs and higher performance and supporting the aforementioned opportunities for AI. Healthcare, with its abundance of data, is in theory well-poised to benefit from growth in cloud computing. The largest and arguably most valuable store of data in healthcare rests in EMRs. However, clinician satisfaction with EMRs remains low, resulting in variable completeness and quality of data entry, and interoperability between different providers remains elusive.

The typical lament of a harried clinician is still “why does my EMR still suck and why don’t all these systems just talk to each other?” Policy imperatives have attempted to address these dilemmas; however, progress has been minimal. In spite of the widely touted benefits of “data liberation”, a sufficiently compelling use case has not been presented to overcome the vested interests maintaining the status quo and justify the significant upfront investment necessary to build data infrastructure.

Furthermore, it is reasonable to suggest that such high-performance computing work has been and continues to be beyond the core competencies of either healthcare organizations or governments and as such, policies have been formulated, but rarely, if ever, successfully implemented. It is now time to revisit these policy imperatives in light of the availability of secure, scalable data infrastructure available through cloud computing that makes the vision of interoperability realizable, at least in theory.

To realize this vision and to realize the potential of AI across health systems, more fundamental issues have to be addressed: who owns health data, who is responsible for it, and who can use it? Cloud computing alone will not answer these questions—public discourse and policy intervention will be needed. The specific path forward will depend on the degree of a social compact around healthcare itself as a public good, the tolerance to public private partnership, and crucially, the public’s trust in both governments and the private sector to treat their healthcare data with due care and attention in the face of both commercial and political perverse incentives.

In terms of the private sector these concerns are amplified as cloud computing is provided by a few large technology companies who have both significant market power and strong commercial interests outside of healthcare for which healthcare data might potentially be beneficial. Specific contracting instruments are needed to ensure that data sharing involves both necessary protection and, where relevant, fair material returns to healthcare organizations and the patients they serve. In the absence of a general approach to contracting, high profile cases in this area have been corrosive to public trust.

Data privacy regulations like the European Union’s General Data Protection Regulation (GDPR) or California’s Consumer Privacy Act are necessary and well-intentioned, though they incur the risk of favoring well-resourced incumbents who are more able to meet the cost of regulatory compliance, thereby possibly limiting the growth of smaller healthcare provider and technology organizations.

Initiatives to give patients access to their healthcare data, including new proposals from the Center for Medicare and Medicaid Services are welcome, and in fact it has long been argued that patients themselves should be the owners and guardians of their health data and subsequently consent to their data being used to develop AI solutions.

In this scenario, as in the current one where healthcare organizations are the de facto owners and guardians of patient data generated in the health system, alongside fledgling initiatives from prominent technology companies to share patient-generated data back into the health system, there exists the need for secure, high-performance data infrastructure to make use of this data for AI applications.

If the aforementioned issues are addressed, there are two possible routes to building the necessary data infrastructure to enable today’s clinical care and population health management and tomorrow’s AI-enabled workflows. The first is an evolutionary path to creating generalized data infrastructure by building on existing impactful successes in the research domain, such as the recent Science and Technology Research Infrastructure for Discovery, Experimentation and Sustainability (STRIDES) initiative from the National Institutes of Health or MIMIC from the MIT Laboratory for Computational Physiology, to generate the momentum for change.

Another, more revolutionary path would be for governments to mandate that all healthcare organizations store their clinical data in commercially available clouds. In either scenario, existing initiatives such as the Observational Medical Outcomes Partnership (OMOP) and Fast Healthcare Interoperability Resources (FHIR) standard that create a common data schema for storage and transfer of healthcare data and AI enabled technology innovations to accelerate the migration of existing data will accelerate progress and ensure that legacy data are included.
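To make the interoperability point concrete, here is roughly what a minimal FHIR “Patient” resource looks like when built and serialized in code. The field names follow the published FHIR standard; the identifier and values are invented:

```python
# A minimal FHIR "Patient" resource sketched as a Python dict. Common
# schemas like FHIR matter precisely because every system can parse this
# same shape; the id and values here are invented for illustration.
import json

patient = {
    "resourceType": "Patient",
    "id": "example-123",  # hypothetical identifier
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-04-02",
}

wire_format = json.dumps(patient)  # what actually travels between systems
# Any FHIR-aware receiver can recover the same fields:
print(json.loads(wire_format)["name"][0]["family"])  # Doe
```

A sending EMR and a receiving analytics pipeline never need to share code, only this agreed-upon shape, which is the whole argument for standards like FHIR and OMOP.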

There are several complex problems still to be solved including how to enable informed consent for data sharing, and how to protect confidentiality yet maintain data fidelity. However, the prevalent scenario for data infrastructure development will depend more on the socioeconomic context of the health system in question rather than on technology.

A notable by-product of a move of clinical and research data to the cloud would be the erosion of market power of EMR providers. The status quo, with proprietary data formats and local hosting of EMR databases, favors incumbents who have strong financial incentives to maintain it. Creation of health data infrastructure opens the door for innovation and competition within the private sector to fulfill the public aim of interoperable health data.

The potential of AI is well described; in reality, however, health systems are faced with a choice: to significantly downgrade the enthusiasm regarding the potential of AI in everyday clinical practice, or to resolve issues of data ownership and trust and invest in the data infrastructure to realize it.

Now that the growth of cloud computing in the broader economy has bridged the computing gap, the opportunity exists to both transform population health and realize the potential of AI, if governments are willing to foster a productive resolution to issues of ownership of healthcare data through a process that necessarily transcends election cycles and overcomes or co-opts the vested interests that maintain the status quo—a tall order. Without this however, opportunities for AI in healthcare will remain just that—opportunities.


Panch, T., Szolovits, P. & Atun, R. Artificial intelligence, machine learning and health systems. J. Glob. Health 8, 020303 (2018).

Shaban-Nejad, A., Michalowski, M. & Buckeridge, D. Health intelligence: how artificial intelligence transforms population and personalized health. NPJ Digit. Med. 1, 53 (2018).

Fogel, A. L. & Kvedar, J. C. Artificial intelligence powers digital medicine. NPJ Digit. Med. 1, 5 (2018).

I wonder whether to be worried about this or to accept it with confidence. Whatever goes into artificial intelligence is provided by humans. How would this work with medical bills? What about the real doctor?

I think about the incorporation of AI into automobiles. Again, Google is right there, among other makers of AI-driven automobiles like Tesla.

The autopilot system has been around for several years, but its functionality is limited to flying the plane once it is already in the sky and everything is going smoothly. It cannot land the plane or deal with complications if problems arise.

Will this be like the health care system of the future?

Thank you for reading,


Comments are welcome

Artificial Sweeteners Dangers


Revealing research on Artificial Sweeteners

What are artificial sweeteners?


A sugar substitute is a food additive that provides a sweet taste like that of sugar while containing significantly less food energy than sugar-based sweeteners, making it a zero-calorie or low-calorie sweetener. Wikipedia

Three artificial sweeteners: Equal (aspartame), Sweet’N Low (saccharin), and Splenda (sucralose)

  • Aspartame: 200 times sweeter than table sugar. Aspartame is known under the brand names NutraSweet, Equal or Sugar Twin.
  • Acesulfame potassium: 200 times sweeter than table sugar. Acesulfame potassium is suited for cooking and baking and known under brand names Sunnet or Sweet One.
  • Advantame: 20,000 times sweeter than table sugar, suited for cooking and baking.
  • Aspartame-acesulfame salt: 350 times sweeter than table sugar, and known under the brand name Twinsweet.
  • Cyclamate: 50 times sweeter than table sugar. Cyclamate is suited for cooking and baking. However, it’s been banned in the US since 1970.
  • Neotame: 13,000 times sweeter than table sugar. Neotame is suited for cooking and baking and known under the brand name Newtame.
  • Neohesperidin: 340 times sweeter than table sugar. It is suited for cooking, baking and mixing with acidic foods. It is not approved for use in the US.
  • Saccharin: 700 times sweeter than table sugar. It’s known under the brand names Sweet’N Low, Sweet Twin or Necta Sweet.
  • Sucralose: 600 times sweeter than table sugar. Sucralose is suited for cooking, baking and mixing with acidic foods. It’s known under the brand name Splenda.
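A quick way to read those multipliers: they tell you how little of each sweetener matches the sweetness of a given amount of sugar. A back-of-the-envelope calculation, using the approximate figures from the list above and illustrative amounts:

```python
# How many grams of sweetener match the sweetness of a given amount of
# sugar, using the approximate multipliers from the list above.
SWEETNESS_VS_SUGAR = {
    "aspartame": 200,
    "sucralose": 600,
    "saccharin": 700,
    "neotame": 13_000,
}

def grams_equivalent(sugar_grams, sweetener):
    """Grams of sweetener as sweet as sugar_grams of table sugar."""
    return sugar_grams / SWEETNESS_VS_SUGAR[sweetener]

# Matching the sweetness of 10 g of sugar takes well under a tenth
# of a gram of any of these:
print(round(grams_equivalent(10, "sucralose"), 4))  # 0.0167
print(round(grams_equivalent(10, "neotame"), 6))    # 0.000769
```

That is why these products are effectively zero-calorie: the quantities used are tiny.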



Aspartame is made from two amino acids, while sucralose is a modified form of sugar with added chlorine. … Diet Coke still uses aspartame.

Widely reported studies have shown a correlation between cancer and aspartame consumption in rats — but not in humans. When it removed the sweetener from Diet Pepsi last year, PepsiCo said it stood behind the safety of aspartame.

Under the trade names Equal, NutraSweet, and Canderel, aspartame is an ingredient in approximately 6,000 consumer foods and beverages sold worldwide, including (but not limited to) diet sodas and other soft drinks, instant breakfasts, breath mints, cereals, sugar-free chewing gum, cocoa mixes, frozen desserts, gelatin desserts, juices, laxatives, chewable vitamin supplements, milk drinks, pharmaceutical drugs and supplements, shake mixes, tabletop sweeteners, teas, instant coffees, topping mixes, wine coolers and yogurt. It is provided as a table condiment in some countries. Aspartame is not recommended for use in baking, as it breaks down and loses much of its sweetness.

Aspartic Acid (40 percent of Aspartame): Dr. Russell L. Blaylock, a professor of neurosurgery at the Medical University of Mississippi, recently published a book thoroughly detailing the damage that is caused by the ingestion of excessive aspartic acid from aspartame. Remember, this is 40 percent of what you drink. Blaylock draws on almost 500 scientific references to show how excess free excitatory amino acids such as aspartic acid and glutamic acid (about 99 percent of monosodium glutamate, or MSG, is glutamic acid) in our food supply are causing serious chronic neurological disorders and a myriad of other acute symptoms.

Too much aspartate or glutamate in the brain kills certain neurons by allowing the influx of too much calcium into the cells. This influx triggers excessive amounts of free radicals, which kill the cells. The neural cell damage that can be caused by excessive aspartate and glutamate is why they are referred to as “excitotoxins.” They “excite” or stimulate the neural cells to death.

According to WebMD, “Some people have reported that aspartame gives them headaches or dizziness or affects their moods, but studies haven’t linked those symptoms to aspartame. If you have phenylketonuria (PKU), a rare metabolic disorder, avoid aspartame, because it contains phenylalanine. Any product containing aspartame has a warning label about that.”

If you have health problems that are listed in the warnings for aspartame usage, aspartame may not be the right choice for you. As a result, the Mayo Clinic advises making sure that you always consult with your doctor first, before ingesting anything that may have an impact on your health. Did I read that right? That is such common sense, why is it necessary to spell it out when it comes to aspartame? Artificial sweetness for thought.

Aspartic acid from aspartame has the same deleterious effects on the body as glutamic acid isolated from its naturally protein-bound state, causing it to become a neurotoxin instead of a non-essential amino acid.

Aspartame in diet sodas, or aspartame in other liquid form are absorbed more quickly and have been shown to spike plasma levels of aspartic acid. 

Aspartic Acid

The Alleged Side Effects of D-Aspartic Acid

Many D-aspartic acid side effect rumors originated in forums and product reviews. While many users said they didn’t have any problems while using D-aspartic acid, some users claimed the following side effects:

  • Acne
  • Headache
  • Diarrhea
  • Mood Swings
  • Depression

Pregnancy and breast-feeding: Not enough is known about the use of aspartates during pregnancy and breast-feeding. Stay on the safe side and avoid use.

Glutamic acid:

Glutamic acid is an α-amino acid that is used by almost all living beings in the biosynthesis of proteins. It is non-essential in humans, meaning the body can synthesize it.

Pepsi to Drop Aspartame From Diet Pepsi.

April 24, 2015 7:41 p.m. ET

PepsiCo Inc. said Friday it would remove the sweetener aspartame from Diet Pepsi, seeking to address consumer concerns about the artificial additive and reverse slumping diet soda sales. Instead, PepsiCo is introducing sucralose, another controversial artificial sweetener.


Sucralose, accidentally discovered by U.K. scientists while they were developing new insecticides, remains the biggest sugar substitute on the market, according to the retail tracking service InfoScan from Information Resources, Inc. Aspartame is made from two amino acids, while sucralose is a modified form of sugar with added chlorine. One 2013 study found that sucralose may alter glucose and insulin levels and may not be “a biologically inert (not chemically reactive) compound.”

“Sucralose is almost certainly safer than aspartame,” says Michael F. Jacobson, executive director at the Center for Science in the Public Interest, a Washington, D.C.-based advocacy group. Diet Pepsi will still contain another FDA-approved artificial sweetener — acesulfame-potassium, or ace-K — which some researchers have said needs further testing and research.

What is Saccharin?

Saccharin is a non-nutritive sweetener that is used in products in many countries. It has not been allowed in Canada as a food additive since the 1970s.



UPDATED MAY 12, 2018

After years of discussion, Health Canada quietly decided last month to permit the sweetener in gum, pop and other non-alcoholic beverages, frozen desserts, alcoholic liqueurs, fruit spreads and other products. The news comes amid growing concern over the serious health risks of consuming too much sugar, including heart disease, stroke and premature death.

Personal Note:

Is the controversy worth the risk? I am searching all over the place to find something conclusive showing that certain artificial sweeteners are risky to your health. These substitutes were once said to cause cancer and neurological disorders, but it is 2019 and I find it irritating to still see a huge question mark.

Why is this so? I was under the impression we had advanced technologically far enough to make a conclusive statement as to whether these artificial sweeteners are a health hazard. There is the notion that anything in moderation is OK. A lot of these soft drinks are addictive. I know I was addicted to Coca-Cola for many years, and when I decided to kick the habit I suffered from the shakes, jitters and a few other uncomfortable side effects.

What’s Next?

Saccharin, like aspartame, sucralose and acesulfame potassium, is an artificial sweetener. It’s been around since the 1800s and became a popular alternative to sugar throughout the 20th century because it was less expensive and had no calories.

But the perception of saccharin changed after the publication of research suggesting saccharin could cause cancer – specifically, studies found it increased the incidence of bladder cancer in rats. Major restrictions on the sale and use of saccharin followed.

Since then, more studies have been done that question the link between saccharin and cancer. The studies have shown that the mechanism that causes bladder cancer in rats isn’t applicable to humans.

Not everyone is happy to see saccharin back in Canada. Lisa Lefferts, a senior scientist with the Washington-based Center for Science in the Public Interest, argues that anything that may cause cancer in lab animals shouldn’t be deemed safe for humans.

The advocacy organization has, for many years, spoken out on the potential risks of artificial sweeteners and urges consumers to avoid saccharin. The mechanism that causes bladder cancer in rats may not apply to humans, but that doesn’t mean saccharin doesn’t pose a risk.

Artificial sweeteners are attractive because they let us eat and drink sweet treats without the caloric guilt. For people with diabetes, they also allow the freedom to eat foods that would otherwise be off-limits.

But, like many things in life, the idea that most of us can have our sugar-free cake and eat it too is simply too good to be true. Studies show that people who consume foods and beverages sweetened artificially are actually driven to eat more, which could be because the sweeteners don’t make the body feel full, or because the sweetness promotes the desire to eat.


Stevia is known as a natural, non-calorie sweetener, made from a plant indigenous to South America. It has been around for centuries and now makes its appearance in sodas and many sports drinks. This substitute is also available in table-top packets, liquid drops, dissolvable tablets, as well as baking blends.

Breaking down Stevia

Older animal studies show that high doses of stevia may be toxic to the kidneys and reproductive system, and could even mutate genes. That’s why the FDA doesn’t allow unrefined or whole-leaf stevia in foods, despite the fact that South Americans have consumed the plant for centuries. But newer data on stevioside and rebaudioside A—the purified extracts—does not show evidence of toxicity (though it’s worth noting that some of this research was funded by companies like Coca-Cola and Cargill, the maker of Truvia).

In 2008, the FDA awarded its first Generally Recognized as Safe (GRAS) status to these extracts, which have been approved for use and sold in Europe, Canada, New Zealand, and Japan, where they have been on the market for decades without any major safety issues. The Joint FAO/WHO Expert Committee on Food Additives has also ruled them to be safe in moderation.

Anything artificial is exactly that: not the real thing, but something chemically modified for profit. Once again, that is my opinion.

Natural alternatives to aspartame include honey, maple syrup, agave nectar, fruit juice, and molasses. In short, individuals who experience adverse side effects should play it safe and avoid aspartame, but most studies conclude that it’s safe for those who enjoy it in moderation.

CTV News
August 29 at 1:31 PM ·
An Ontario family was devastated when their beloved Great Dane Gracie died after eating four packages of chewing gum—and are warning others to be careful with products containing an artificial sweetener that may have played a role in the dog’s death.

Published Wednesday, August 28, 2019 8:15PM EDT
Last Updated Thursday, August 29, 2019 5:47PM EDT


Jennifer Watt, from Schomberg, told CTV News Toronto that Gracie was a wonderful dog for her three children.

“She was a great family pet. She would play with her brother and let the kids climb all over her,” said Watt.

Watt says she had purchased 12 packs of chewing gum and wrapped them as part of a gift. Gracie found the present hidden under a bed, ripped it open, and ate four packages of Pur chewing gum, which is sweetened with xylitol, an artificial sweetener.

“Within a few minutes she was able to get from upstairs to downstairs and then she collapsed and went unconscious.”

Gracie weighed about 150 pounds and Watt says her husband scooped her up in his arms and took her to the local veterinary clinic, which tried to stabilize her.

The chewing gum contained xylitol, a sugar substitute that is toxic to dogs and cats. The family later took Gracie to an animal hospital, but despite more than $7,000 worth of medical tests and treatment, she died of liver failure 48 hours later.

“So a couple of packs of gum cost us, with the fees of the veterinarian and replacing the dog, close to $10,000,” said Watt.

This isn’t the first time a beloved pet became ill after consuming a product with the artificial sweetener.

CTV News Toronto spoke with Arnie Charlton, of Windsor, last month when his cockapoo Lexi almost died. The dog quickly snatched a single piece of gum that was dropped on the floor by his granddaughter. It also contained the artificial sweetener xylitol.

The dog became sick and also had to be rushed for veterinary care.

Xylitol is also used as a sweetener in baked goods, toothpaste and peanut butter.

“Something like peanut butter seems like an innocent way to give medication, but if it’s a peanut butter that contains this artificial sweetener that could be fatal, so read labels before you give your pet any substance,” said Melanie Couleter with the Windsor Essex County Humane Society.

Watt has two other Great Danes now. Her dog Gracie died a few years ago, but she says her family never went public with what happened.

Watt says after seeing more of these artificial sweeteners on the market containing xylitol she decided to contact CTV News Toronto to tell her story.

“I’m just trying to prevent other families having to go through what we went through,” Watt said.

So what is this artificial sweetener, xylitol?

Xylitol is a naturally occurring sugar alcohol found in most plant material, including many fruits and vegetables. It is extracted from birch wood to make medicine.

Xylitol is widely used as a sugar substitute and in “sugar-free” chewing gums, mints, and other candies.

Dog owners should know that xylitol can be toxic to dogs, even in the relatively small amounts found in candies. If your dog eats a product that contains xylitol, it is important to take the dog to a veterinarian immediately.

It is dangerous to dogs. Personally, if this sweetener contributed to the death of a 150-pound Great Dane, I would be really afraid to have it anywhere near my children.

Please, please be careful when it comes to artificial sweeteners.

Artificial Sweeteners

Artificial Sweeteners Dangers


Some of the many side effects of using artificial sweeteners


The Dangers of Artificial Sweeteners — images and information. Please click on the links below.

Dangers of artificial sweeteners confirmed

CNN Report on Artificial Sweeteners

Videos, images, and further information

Please stay healthy. Go the natural way for the best benefits. I have dedicated my site to alternatives and healthy lifestyles.

Thank you for visiting,

Your comments are welcome,