
Three Different Perspectives on AI Provide Food for Thought at NJ-GMIS 2024 TEC – NJ Tech Weekly

On April 18, 2024, the NJ-GMIS Technology Education Conference (TEC) welcomed 150 IT professionals from Garden State municipal, county, and school districts to the Palace at Somerset Park to learn how technology can make their jobs easier. The conference was sponsored by the New Jersey Government Management Information Sciences (NJ-GMIS).

The keynote speaker at the conference was Giani Cano, an expert in mentalism, who talked about unlocking the power of focus to improve decision-making. Attendees also had the opportunity to talk to vendors who were showcasing their software at the show.

NJTechWeekly.com participated in a session titled “The Impact of Artificial Intelligence on Local Government and Educational Institutions,” where three speakers discussed their experiences with AI in the workplace and provided advice on how NJ-GMIS members could incorporate AI into their processes.

Bernadette Kucharczuk calls for a cautious approach

Bernadette Kucharczuk warns of the pitfalls of artificial intelligence. | Esther Surden

A cautious perspective came from Bernadette Kucharczuk, a certified government CIO who recently retired as Jersey City’s chief information technology officer.

Kucharczuk believes there are insufficient safeguards against AI exploiting “your digital breadcrumbs,” the information you freely share online. “AI is already making decisions about you. It’s car insurance, right? FICO scores. All of those (decisions) are made based on your digital breadcrumbs, the things you do. Patterns of human behavior are driving AI search results.”

While using AI to screen job applications may seem like a great idea because the hard work is being done by something that is completely unbiased, “your system is screening out people who are likely to be depressed in the future. They’re not depressed now. They just might be in the future,” she said. It’s screening out “women who are likely to get pregnant in the next year or two. What if they hire people who are aggressive because that’s the culture in your workplace?”

She cited several colossal AI fails, including the use of algorithms to determine prison sentences and the alleged way Facebook’s algorithm missed news of the events in Ferguson, Missouri, “after a black teenager was killed by a white police officer under unclear circumstances.”

While AI advocates say these bugs will be addressed and fixed, she noted that it’s been two years since some of these things happened and nothing has been done. Her plea: Don’t rely on AI until these issues are fixed.

“We can’t avoid these difficult questions,” Kucharczuk continued. “We can’t outsource our responsibility to machines. AI doesn’t give us some kind of ‘get out of ethics free’ pass. Data scientist (Fred) Benenson calls this ‘mathwashing.’ We need the opposite. We need to cultivate algorithmic suspicion, scrutiny, and inquiry. We need to make sure we have algorithmic accountability, auditing, and meaningful transparency. We need to accept that bringing mathematics and computation into messy, value-laden human affairs doesn’t bring objectivity.”

Kucharczuk concluded her presentation by quoting techno-sociologist Zeynep Tufekci.

Sandra Paul believes that practical applications of AI are essential

Sandra Paul to Speak at NJ-GMIS TEC 2024 | Esther Surden

On the other side of the coin, Sandra Paul, director of information technology for Township of Union Public Schools, spoke about the productivity gains AI can provide. She said, “We have over 7,800 kids and 1,200 staff. But I only have a team of five. We have 9,000 endpoints in the school system, including copiers, printers, security cameras, door entry systems, and phones.” There are 500 preschool children, and each one generates 5,000 data points.

She added that there are a number of big data management challenges that AI could address. Every time a child has or reports a problem, she noted, it generates a data point. “Think about it. All of these incidents happen in school, and we have to keep records of them.” In addition, special education records must be kept forever. Employee retirement records also generate a lot of data, as does the email system. “We have a lot of compliance when it comes to federal and local and county disaster recovery planning requirements,” for example. Is it any wonder that school district technology officers are using ChatGPT to write their disaster recovery plans?

The infrastructure at a school “requires monitoring alerts and notifications, application management, security management, server provisioning, server management, and network management.” She noted that security cameras are required by federal and state law to be able to track where you go in the building. It’s not about someone sitting somewhere looking at a screen. AI does all the tracking. Paul added that security cameras aren’t in bathrooms or locker rooms. “We have enough cyberbullying already,” she said.

Another area where AI helps is with family communications. When a problem occurs at a school, there’s a delay between when the incident occurs and when parents are notified. Paul said one of her schools had a chemical spill that set off a fire alarm, and the AI-assisted program she used was able to notify every child and parent on their cellphone. The system also automates compliance checks, based on federal and state guidelines, so Paul can make sure the district is following all applicable regulations.

Marc Pfeiffer warns of policy changes needed as AI is adopted

The final speaker was Marc Pfeiffer, senior fellow and faculty researcher at the Center for Urban Policy Research, Bloustein School of Planning and Public Policy, Rutgers–New Brunswick, who also had a 37-year career in local government in New Jersey. Pfeiffer spoke about the policy implications of adopting AI tools and infrastructure.

“AI is going to pull our IT administrators who are focused on management more and more into the world of public policy,” Pfeiffer told the group. “It’s an area where you may not have had any training or education, but it’s an area that you’re going to have to learn about.

“I want you to leave here knowing that this is something that is part of your world. Now. Management will find that technology governance and public policy will overlap,” he said.

It’s not just chatbots anymore. It’s machine learning and natural language processing. It’s neural networks. All of these technologies are coming together. AI also includes robotic process automation, business process automation, and computer vision, the ability to use cameras to observe a scene, interpret what is seen, and act on it.

“So when there’s a robbery in a parking lot, you don’t need a police officer who’s watching maybe 300 cameras in the city to see what’s going on. They’re going to get an alert, because the AI has been trained to recognize that type of human behavior and send an alert, so they can respond quickly.”

“The latest is ‘digital twins,’ which take environments like traffic systems, water treatment systems, wastewater treatment systems, and see all of their components digitally. That way you can model what happens when something goes wrong.” AI will be integrated into most digital goods and services within three to five years, if not sooner, he predicted.

“When we talk about public policy, we mean things that set the strategic direction, operate within the legal framework and priorities of government,” he said.

One example is the idea that you don’t have personal privacy when you use your employer’s or school system’s devices. That’s a policy decision. “But there are also decisions about how to administer them, and we need to figure out how to administer them better.”

Take the example of surveillance cameras in a public park or other public places. “Some of the public policy questions are: How are they being monitored? By AI or by humans? How are incidents being flagged and responded to? Any police department that relies solely on facial recognition is in trouble. Arresting someone based on that alone is wrong. You’re going to have to do a human identification on top of that, because AI itself makes mistakes. It’s going to get better, but you have to do something to bring human judgment in there.”

There are other implications, Pfeiffer said. For example, under what circumstances is a public record released? And to whom? Think of requests filed under New Jersey’s Open Public Records Act (OPRA) and the exceptions to that policy. Once videos are shared, the people who receive them can post them online. “We need new public policy because we haven’t thought about the issues of values, personal safety, crime prevention and the implications for public safety. You need to address those policy issues over the next few years as AI becomes more and more pervasive,” he said.

Pfeiffer’s talk generated many questions from the audience as members pondered all the changes that artificial intelligence will bring to computing practices in schools and cities in the future.
