The properties of nanomaterials, which depend on their size, shape, and surface characteristics, are crucial for technological, biological, and environmental applications. Accurate quantification of these materials is essential for advancing research. Deep learning segmentation networks offer precise, automated analysis, but their effectiveness depends on representative annotated datasets, which are difficult to obtain because imaging and annotation demand high cost and manual effort. To address this, we present DiffRenderGAN, a generative model that produces annotated synthetic data by integrating a differentiable renderer into a Generative Adversarial Network (GAN) framework. DiffRenderGAN optimizes rendering parameters to generate realistic, annotated images from non-annotated real microscopy images, reducing manual effort and improving segmentation performance compared to existing methods. Tested on ion and electron microscopy datasets, including titanium dioxide (TiO2), silicon dioxide (SiO2), and silver nanowires (AgNW), DiffRenderGAN bridges the gap between synthetic and real data, advancing the quantification and understanding of complex nanomaterial systems.
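The core idea of optimizing rendering parameters against real, non-annotated images can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: the physically based differentiable renderer is replaced by a toy Gaussian-blob rasterizer, the GAN discriminator by a fixed feature-matching loss, and analytic gradients by finite differences. All names (`render`, `optimize`) and parameter choices here are hypothetical.

```python
import numpy as np

def render(params, size=32):
    """Toy 'renderer': draws a Gaussian blob whose radius and brightness
    are the learnable rendering parameters. Because the image is generated
    from known parameters, a pixel-exact segmentation mask comes for free."""
    radius, brightness = params
    y, x = np.mgrid[0:size, 0:size]
    r2 = (x - size / 2) ** 2 + (y - size / 2) ** 2
    image = brightness * np.exp(-r2 / (2.0 * radius ** 2))
    mask = (image > 0.5 * brightness).astype(np.uint8)  # free annotation
    return image, mask

def loss(params, real_stats):
    """Stand-in for the adversarial loss: match simple intensity statistics
    (mean, std) of the real micrographs instead of fooling a discriminator."""
    image, _ = render(params)
    return (image.mean() - real_stats[0]) ** 2 + (image.std() - real_stats[1]) ** 2

def optimize(real_stats, params=None, lr=2.0, steps=400, eps=1e-4):
    """Gradient descent on the rendering parameters; a true differentiable
    renderer would supply analytic gradients instead of finite differences."""
    params = np.array([3.0, 0.5]) if params is None else params.astype(float)
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            d = np.zeros_like(params)
            d[i] = eps
            grad[i] = (loss(params + d, real_stats) - loss(params - d, real_stats)) / (2 * eps)
        params = params - lr * grad
    return params
```

In this sketch, a "real" micrograph is summarized by two statistics and the renderer's parameters are driven toward them; each rendered image is paired with its exact mask, which is the property that makes renderer-based synthesis attractive for training segmentation networks without manual annotation.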