From the Howard Newsroom:
WASHINGTON – The Howard University Department of Electrical Engineering and Computer Science recently received a two-year, $300,000 grant from the National Science Foundation through the Secure and Trustworthy Cyberspace (SaTC) program. The grant will support research and education that integrate artificial intelligence and cybersecurity.
"As more and more systems are relying on automated operations using computers and machine intelligence to do different tasks, such as filtering out applicants in the hiring process and automated crime data analysis, bias shows up," says Danda B. Rawat, Ph.D., the project's principal investigator and director of Howard University's Data Science & Cybersecurity Center (DSC2).
Rawat added, "It is important to design and test machine learning (ML) algorithms and AI systems that produce reliable, robust, trustworthy and fair/unbiased outcomes to make them acceptable by diverse communities."
The Howard research project will help train a next-generation STEM workforce with knowledge of integrated cybersecurity and AI, meeting the evolving demands of the U.S. government and industry while improving the nation's economic security and preparedness.
The project focuses on both AI for cybersecurity and cybersecurity for AI. Machine learning (ML) algorithms and AI systems have recently been shown to achieve machine cognition comparable to, or even better than, human cognition for some applications, and ML algorithms are now regarded as promising cybersecurity solutions. However, because ML algorithms and AI systems can be manipulated, evaded, biased, and misled through flawed learning models and input data, they require robust security features to produce trustworthy AI.
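The evasion risk described above can be illustrated with a minimal sketch, assuming a toy logistic classifier with made-up weights (this is a generic illustration of a gradient-sign-style evasion attack, not code from the Howard project):

```python
import math

# Hypothetical fixed weights for a two-feature logistic classifier.
# A real model would learn these from training data.
W = [1.5, -2.0]
B = 0.25

def predict(x):
    """Return the model's probability that input x belongs to class 1."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def evade(x, eps=0.5):
    """Gradient-sign-style evasion: nudge each feature in the direction
    that pushes the model's decision toward the opposite class."""
    # For a logistic model, the input gradient is proportional to the
    # weights, so each feature moves against the sign of its weight.
    sign = 1.0 if predict(x) >= 0.5 else -1.0
    return [xi - sign * eps * math.copysign(1.0, w) for xi, w in zip(x, W)]

x_clean = [1.0, 0.2]       # classified as class 1 by this toy model
x_adv = evade(x_clean)     # small perturbation flips the decision
print(predict(x_clean) >= 0.5, predict(x_adv) >= 0.5)  # True False
```

The perturbed input differs from the original by only 0.5 per feature, yet the classification flips, which is why input validation and robust training are central to securing AI systems.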