LANCASTER, Pa. (AP) — In a case that underscores the intersection of technology and child safety, two teenage boys were placed on probation Wednesday after admitting to creating 59 child sex abuse images using artificial intelligence. The boys, both 14 at the time, used AI tools to morph photos of their classmates, mostly taken from Instagram, so that the girls appeared nude.

Victims included fellow students from Lancaster Country Day School, located just west of Philadelphia. Reports revealed that a concerned parent alerted authorities after her daughter mentioned a student was using AI technology to manipulate photographs of girls into explicit images.

During the hearing, which took place before Judge Leonard Brown III, the court addressed the serious implications of the boys' actions. Each was sentenced to 60 hours of community service and ordered to have no contact with the victims. The judge noted a lack of remorse from the boys and said that, had they been adults, they could have faced harsher sentences.

Heidi Freese, an attorney for one of the defendants, characterized the proceedings as "regrettable" and hinted at complex legal issues that could arise in related cases in the future.

Pennsylvania Attorney General Dave Sunday highlighted the "dark side" of technology and social media, stating the case exemplifies how easily it can be weaponized to harm children. He emphasized the devastation felt by the victims as a result of the boys' actions.

The case closely follows a lawsuit in Tennessee in which three teenagers sued Elon Musk's xAI, claiming the company's tools had morphed their photos into sexualized images without consent. Such cases underscore the growing pressure for stricter regulations on deepfakes and image manipulation technologies.

The resolutions of such cases have already sparked discussions about school accountability and victim support, with legal action expected against the educational institutions involved. Victims' rights advocates continue to push for stronger laws protecting minors from digital exploitation.

As legislation evolves, many states are moving towards enacting laws that target deepfake technology to ensure the safety and well-being of children in an increasingly digital world.