Datao Tang¹ Xiangyong Cao¹ Xuan Wu¹ Jialin Li¹ Jing Yao³ Xueru Bai² Deyu Meng¹
¹Xi'an Jiaotong University ²Xidian University
³Chinese Academy of Sciences
Corresponding author.
Abstract
Remote sensing image object detection (RSIOD) aims to identify and locate specific objects within satellite or aerial imagery. However, there is a scarcity of labeled data in current RSIOD datasets, which significantly limits the performance of current detection algorithms. Although existing techniques, e.g., data augmentation and semi-supervised learning, can mitigate this scarcity issue to some extent, they are heavily dependent on high-quality labeled data and perform poorly on rare object classes. To address this issue, this paper proposes a layout-controllable diffusion generative model (i.e., AeroGen) tailored for RSIOD. To our knowledge, AeroGen is the first model to simultaneously support horizontal and rotated bounding box condition generation, thus enabling the generation of high-quality synthetic images that meet specific layout and object category requirements. Additionally, we propose an end-to-end data augmentation framework that integrates a diversity-conditioned generator and a filtering mechanism to enhance both the diversity and quality of generated data. Experimental results demonstrate that the synthetic data produced by our method are of high quality and diversity. Furthermore, the synthetic RSIOD data can significantly improve the detection performance of existing RSIOD models, i.e., the mAP metrics on the DIOR, DIOR-R, and HRSC datasets are improved by 3.7%, 4.3%, and 2.43%, respectively. The code is available at https://github.com/Sonettoo/AeroGen.
1 Introduction
Object detection is a key technology for understanding and analyzing remote sensing images. It enables efficient processing of large-scale satellite data to extract and identify critical information, such as land cover changes [40], urban development status [15], and the impacts of natural disasters [43]. Through object detection, researchers can automatically extract terrestrial targets from complex remote sensing images, including buildings, vehicles, roads, bridges, farmlands, and forests. This information can be further applied in environmental monitoring, urban planning, land use analysis, and disaster emergency management.
With the rapid development of deep learning, supervised learning-based object detection algorithms have made significant progress in remote sensing image analysis [47]. Although these algorithms can accurately locate and classify multiple objects in remote sensing images, they depend heavily on large amounts of labelled training data. However, obtaining sufficient annotated data for remote sensing images is particularly challenging. Due to the presence of numerous and complex targets in remote sensing images, the manual annotation process is not only time-consuming and labour-intensive but also requires annotators to possess specialized knowledge, thus leading to high costs.
Although traditional data augmentation methods [3] (e.g., rotation and scaling) and enhancement techniques suitable for object detection (e.g., image mirroring [14], object-centric cropping [20], and copy-paste [6]) can increase data diversity to some extent, they do not address the fundamental issue of insufficient data. The emergence of generative models [24, 11] provides a new solution to this problem. In the field of natural images, numerous high-performance generative models [30, 26] have been developed that are capable of generating high-quality images from text conditions, with significant progress in layout control as well. For remote sensing images, generative models are usually applied in combination with specific tasks, such as change detection [44], semantic segmentation [33], and road extraction [32]. These studies have been highly successful in using data obtained from generative models to augment real-world datasets, thereby enhancing the performance of target models in downstream tasks. Therefore, using generative diffusion models to fit the distribution of existing datasets and generate new samples is a feasible way to enhance the diversity and richness of remote sensing datasets.
In this paper, we focus on the remote sensing image object detection (RSIOD) task and construct a layout generation model (i.e., AeroGen) specifically designed for this task. The proposed AeroGen model allows layout prior conditions to be specified with horizontal and rotated bounding boxes, enabling the generation of high-quality remote sensing images that meet the specified conditions and thus filling a gap in RSIOD research. Based on the AeroGen model, we further propose a conditional generation-based end-to-end data augmentation framework. Unlike pipeline-style data augmentation schemes in the natural image domain [41], our pipeline directly synthesizes RSIOD data through conditional generative models, eliminating the need for additional instance-pasting procedures. By introducing a diversity-conditioned generator and generation quality evaluation, we further enhance the diversity and quality of the generated images, thereby achieving end-to-end data augmentation for downstream object detection tasks. Moreover, we also design a novel filtering mechanism in this data augmentation pipeline to select high-quality synthetic training images, further boosting performance.
In summary, the contributions of our work are threefold:
- We propose a layout-controllable diffusion model (i.e., AeroGen) specifically designed for remote sensing images. This model can generate high-quality RSIOD training datasets that conform to specified categories and spatial positions. To our knowledge, AeroGen is the first generative model to support layout conditional control for both horizontal and rotated bounding boxes.
- We design a novel end-to-end data augmentation framework that integrates the proposed AeroGen generative model with a layout condition generator as well as an image filter. This framework can produce synthetic RSIOD training datasets with high diversity and quality.
- Experimental results show that the synthetic data can improve the performance of current RSIOD models, with improvements in mAP metrics by 3.7%, 4.3%, and 2.43% on the DIOR, DIOR-R, and HRSC datasets, respectively. Notably, the performance in some rare object classes also significantly improves, e.g., achieving improvements of 17.8%, 14.7%, and 12.6% in the GF, DAM, and APO categories, respectively.
2 Related Work
2.1 Diffusion Models
Diffusion models [11, 24, 30], known for their stable training process and excellent generative capabilities, are gradually replacing Generative Adversarial Networks (GANs) [29, 39] as the dominant model in generative tasks. Text-guided diffusion models can produce realistic images, but the concise nature of text descriptions often makes it challenging to provide precise guidance for image generation, thereby limiting personalized generation capabilities. To address this issue, more researchers have introduced additional control conditions beyond text guidance, significantly expanding the application scope of diffusion models. These applications include layout guidance [16, 45, 35], style transfer [36, 5], image denoising and super-resolution [4], and video generation [12], showcasing the enormous potential of diffusion models in various complex tasks. Among these models, LDM [30] restricts the diffusion process to a low-dimensional latent space, which not only preserves the high quality of the generated images but also significantly reduces computational complexity, serving as the foundation for numerous generative studies.
2.2 Task-Oriented Data Generation
The use of generative models to synthesize training data for tasks like object detection [46, 17], semantic segmentation [33], and instance segmentation [41] has garnered significant attention from researchers. Generative models not only produce artistic natural images but can also quickly adapt to specific industry scenarios such as remote sensing, medical, and industrial fields through techniques like fine-tuning. For instance, Graikos et al. [8] proposed representation-guided models that generate embeddings rich in semantic and visual information through self-supervised learning (SSL), which then guide the diffusion model to generate images. This approach reduces the difficulty of obtaining high-precision annotated data in specialized fields like histopathology and satellite imagery. In the area of remote sensing image generation, SatSynth [33] uses diffusion models to jointly learn the distribution of remote sensing images and their corresponding semantic segmentation labels. By generating semantically informed remote sensing images through joint sampling, it improves the performance of downstream segmentation tasks. Pang et al. [25] proposed a two-stage hyperspectral image (HSI) super-resolution framework that generates large amounts of realistic hyperspectral data for tasks like denoising and super-resolution. Moreover, models designed for optical remote sensing image generation, e.g., CRS-Diff [32] and DiffusionSat [13], handle multiple types of conditional inputs and apply synthetic data to specific tasks such as road extraction. However, no existing research has specifically explored image generation methods for remote sensing image object detection (RSIOD) tasks. To fill this gap, we propose the first layout-controllable generative model that supports both rotated and horizontal bounding boxes and is capable of synthesizing high-precision remote sensing images.
2.3 Generative Data Augmentation
To effectively apply synthetic data to downstream tasks, most existing methods directly combine synthetic data with real data for training. However, some studies (e.g., Auto Cherry-Picker [1]) have improved data quality by filtering synthetic data, thus better enhancing the performance of downstream tasks. For example, X-Paste [41] proposed a pipeline method that uses a copy-paste strategy to synthesize images, combined with a CLIP-based filtering mechanism, to further improve instance segmentation performance. A more comprehensive review of this issue can be found in DiverGen [7], which analyzes the application of synthetic data from the perspective of data distribution. It combines a copy-paste strategy to construct a multi-stage pipeline that enhances diversity and achieves significant results on the LVIS dataset [9].
The most closely related approach to our work is ODGEN [46], which employs an object-wise generation strategy to produce consistent data for multi-object scenes, addressing the domain gap and concept bleeding issues in image generation. In contrast, our work focuses on object detection in remote sensing images, utilizing a conditional generative model to directly synthesize data, thereby avoiding the additional instance-pasting process. Furthermore, we introduce a novel diversity-conditioned generator, combined with a filtering mechanism that accounts for both diversity and generation quality, to further enhance the diversity and quality of generated images. Through this approach, we achieve end-to-end data augmentation, significantly improving the performance of downstream tasks.
3 AeroGen
In this section, we introduce AeroGen, a layout-conditional diffusion model for remote sensing image data augmentation. The model consists of two key components: (a) a remote sensing image layout generation model (Sec. 3.1) that allows users to generate high-quality RS images from predefined layout conditions, such as horizontal and rotated boxes; and (b) a generation pipeline (Sec. 3.2) that combines a diffusion-model-based diversity-conditional generator, which produces diverse layouts aligned with physical conditions, with a data-filtering mechanism that balances the diversity and quality of synthetic data, improving the utility of the generated dataset.
3.1 Layout-conditional Diffusion Model
We adopt model weights obtained by comprehensively fine-tuning LDM [30] on remote sensing data [32] as the starting point for our study. In the original text-to-image diffusion model, conditioning is provided only by the text prompt; we achieve layout-based remote sensing image generation by establishing a unified position-information encoding combined with the text control condition through a corresponding dual cross-attention network, as shown in Fig. 1. Building on recent research advances, we further combine this with a regional layout mask-attention strategy, which improves control accuracy, particularly for small target regions.
Layout Embedding. As shown in Fig. 1(a), each object's horizontal or rotated bounding box is uniformly represented as a list of eight coordinates, i.e., $(x_1, y_1, x_2, y_2, x_3, y_3, x_4, y_4)$, ensuring a consistent representation between horizontal and rotated bounding boxes. Building on this, Fourier encoding [23] is employed to convert these positional coordinates into a frequency-domain vector representation, similar to GLIGEN [16]. We use a frozen CLIP text encoder [27] to obtain fixed codes for the different categories, which serve as layout condition inputs. The Fourier-encoded coordinates are then fused with the category encodings using an additional linear layer to produce the layout control input:
$$\mathbf{l} = \mathrm{MLP}\big(\mathrm{Fourier}(\mathbf{b}) \oplus \mathrm{CLIP}(c)\big), \qquad (1)$$

where $\oplus$ denotes the concatenation of the Fourier-coded coordinates $\mathrm{Fourier}(\mathbf{b})$ and the category codes $\mathrm{CLIP}(c)$, and $\mathrm{MLP}(\cdot)$ represents the linear transformation layer. In this manner, spatial location and category information are effectively combined as layout control tokens.
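To make the token construction concrete, the following PyTorch sketch encodes eight-value box coordinates with Fourier features and fuses them with frozen CLIP category embeddings; the helper names, frequency count, and feature dimensions are illustrative assumptions rather than the released implementation:

```python
import torch
import torch.nn as nn

def hbb_to_coords(x1, y1, x2, y2):
    """A horizontal box expressed as the same eight-value corner list
    used for rotated boxes, so both share one representation."""
    return [x1, y1, x2, y1, x2, y2, x1, y2]

def fourier_embed(coords: torch.Tensor, n_freqs: int = 8) -> torch.Tensor:
    """Map normalized box coordinates (B, N, 8) to frequency features."""
    freqs = 2.0 ** torch.arange(n_freqs, device=coords.device) * torch.pi
    angles = coords.unsqueeze(-1) * freqs               # (B, N, 8, F)
    feats = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return feats.flatten(start_dim=2)                   # (B, N, 8 * 2F)

class LayoutTokenizer(nn.Module):
    """Fuse Fourier-encoded coordinates with frozen CLIP category
    embeddings into layout control tokens, following Eq. (1)."""
    def __init__(self, n_freqs=8, clip_dim=768, token_dim=768):
        super().__init__()
        self.proj = nn.Linear(8 * 2 * n_freqs + clip_dim, token_dim)

    def forward(self, coords, cat_emb):
        # coords: (B, N, 8) corner values in [0, 1];
        # cat_emb: (B, N, clip_dim) frozen CLIP text features per category.
        return self.proj(torch.cat([fourier_embed(coords), cat_emb], dim=-1))
```

Representing a horizontal box by its four corners (as in `hbb_to_coords`) is what lets horizontal and rotated boxes share a single encoder.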
Layout Mask Attention. In addition to traditional token-based control, recent studies indicate that direct semantic embedding based on feature maps is also an effective method for layout guidance. In the denoising process of a diffusion model, the injection of conditional information is gradual, enabling local attribute editing at the noise level. To this end, conditionally encoded noise-region steering is employed and combined with a cropping step for improved layout precision. As shown in Fig. 1(b), each bounding box is first transformed into a 0/1 mask $m_i$, and category attributes $c_i$ are obtained through CLIP encoding. During each denoising step, the mask attention network provides additional layout guidance. For each denoised latent $z_t$ and category encoding $c_i$, the mask is used for attention computation according to the following equation:

$$\mathrm{MaskAttn}(z_t, c_i) = \mathrm{softmax}\!\left(\frac{Q(z_t)\,K(c_i)^{\top}}{\sqrt{d}} + \log M_i\right) V(c_i),$$

where $m_i$ represents the corresponding bounding box mask, and $M_i$, derived from $m_i$, serves as the attention mask. This method enables precise manipulation of local noise characteristics during the diffusion generation process, offering finer control over the image layout.
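A minimal sketch of this masked-attention step, assuming the log-mask formulation above and omitting the learned Q/K/V projections and multi-head split:

```python
import torch

def layout_mask_attention(z, cat_emb, masks, d_head=64):
    """Each spatial location of the noisy latent attends only to the
    categories whose box mask covers it.
    z: (B, HW, C) flattened latent; cat_emb: (B, N, C) CLIP category codes;
    masks: (B, N, HW) binary box masks (1 inside a box, 0 outside)."""
    q, k, v = z, cat_emb, cat_emb                     # projections omitted
    logits = q @ k.transpose(1, 2) / d_head ** 0.5    # (B, HW, N)
    region = masks.transpose(1, 2)                    # (B, HW, N)
    logits = logits.masked_fill(region == 0, -1e9)    # block out-of-box pairs
    attn = torch.softmax(logits, dim=-1)
    covered = (region.sum(dim=-1, keepdim=True) > 0).to(z.dtype)
    return z + covered * (attn @ v)  # residual; only covered pixels guided
```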
AeroGen Architecture. In AeroGen, the text prompt serves as a global condition and is integrated with the layout control tokens via a dual cross-attention mechanism. The output is computed as:

$$\mathrm{DCA}(z) = \mathrm{CA}(Q_z, K_g, V_g) + \beta \cdot \mathrm{CA}(Q_z, K_l, V_l), \qquad (2)$$

where $\mathrm{CA}(\cdot)$ represents the cross-attention mechanism, $K_g$ and $V_g$ are the keys and values of the global text condition, $K_l$ and $V_l$ are those of the layout control tokens, and $\beta$ balances the influence of global and layout conditions.
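The dual cross-attention can be sketched as two parallel attention streams blended by a scalar weight; the module below is a simplified stand-in (single block, fixed dimensions) rather than the actual UNet integration:

```python
import torch
import torch.nn as nn

class DualCrossAttention(nn.Module):
    """Blend global text conditioning with layout-token conditioning,
    following Eq. (2): the same latent queries attend to both streams."""
    def __init__(self, dim=768, n_heads=8, beta=1.0):
        super().__init__()
        self.attn_text = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.attn_layout = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.beta = beta  # balances global text vs. layout guidance

    def forward(self, z, text_tokens, layout_tokens):
        # z: (B, HW, dim) latent queries; tokens: (B, T, dim) each.
        h_text, _ = self.attn_text(z, text_tokens, text_tokens)
        h_layout, _ = self.attn_layout(z, layout_tokens, layout_tokens)
        return h_text + self.beta * h_layout
```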
The overall loss function for AeroGen combines both the global text condition and the layout control, defined as:

$$\mathcal{L} = \mathbb{E}_{z_t,\, \epsilon \sim \mathcal{N}(0, I),\, t}\!\left[\left\|\epsilon - \epsilon_\theta(z_t, t, c_g, c_l)\right\|_2^2\right], \qquad (3)$$

where $z_t$ represents the noisy image at time step $t$, $c_g$ is the global text condition, and $c_l$ is the layout control.
3.2 Generative Pipeline
The generative pipeline, as illustrated in Fig. 2, is divided into five stages: label generation, label filtering, image generation, image filtering, and data augmentation. Each generation step is followed by a corresponding filtering step to ensure synthesis quality.
Label Generation. Inspired by recent cutting-edge research [33], we adopt a denoising diffusion probabilistic model (DDPM [11]) to learn the conditional distribution and directly sample from it to obtain layout labels, thereby avoiding the conflicting layout conditions that can arise from random synthesis approaches. The specific method is illustrated in Fig. 2, where a labelling matrix $L$ is first constructed. This matrix contains all categories of conditions with dimensions $H \times W \times C$, where $H$ and $W$ represent the height and width of the images, respectively, and $C$ denotes the number of condition categories. For each condition corresponding to a target box of the image, the value within the target box region is set to 1, while the values in the remaining regions are set to -1. This process is formally represented as:

$$L_{i,j,c} = \begin{cases} 1, & (i,j) \in \Omega_c, \\ -1, & \text{otherwise}, \end{cases} \qquad (4)$$

where $1 \le i \le H$, $1 \le j \le W$, and $1 \le c \le C$, with $\Omega_c$ denoting the target area for the $c$-th category. Next, this conditional distribution is fitted using a DDPM-based generator $\epsilon_\theta$. The loss function is based on the mean squared error (MSE):

$$\mathcal{L}_{\text{layout}} = \mathbb{E}_{L_0,\, \epsilon,\, t}\!\left[\left\|\epsilon - \epsilon_\theta(L_t, t)\right\|_2^2\right], \qquad (5)$$

where $L_0$ represents the original layout matrix, $L_t$ is the noisy matrix at the $t$-th time step, and $\epsilon_\theta(L_t, t)$ denotes the model's predicted noise at step $t$.
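A sketch of how such a conditioning matrix can be rasterized from box annotations (axis-aligned case shown; a rotated box would instead be filled as a polygon, e.g., with `cv2.fillPoly`):

```python
import numpy as np

def boxes_to_layout_matrix(boxes, labels, H, W, C):
    """Rasterize annotations into the H x W x C conditioning matrix of
    Eq. (4): +1 inside a category's box region, -1 everywhere else.
    boxes: iterable of (x1, y1, x2, y2) pixel coordinates (axis-aligned);
    labels: category index per box."""
    L = -np.ones((H, W, C), dtype=np.float32)
    for (x1, y1, x2, y2), c in zip(boxes, labels):
        L[y1:y2, x1:x2, c] = 1.0  # rows index y, columns index x
    return L

# Example: one 30x20-pixel box of category 0 in a 256x256, 20-category layout.
layout = boxes_to_layout_matrix([(100, 80, 130, 100)], [0], 256, 256, 20)
```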
Label Filter and Enhancement. The label data sampled from the generator may not always align with real-world intuition or effectively guide image generation. Therefore, we propose a normal-distribution-based filtering mechanism to screen the generated bounding box information, ensuring that the data conform to the distribution characteristics of real labels. The label filter assumes that an attribute of the bounding boxes (e.g., area $a$) follows a normal distribution $\mathcal{N}(\mu, \sigma^2)$ and introduces the constraint $\mu - k\sigma \le a \le \mu + k\sigma$, where $k$ determines the filter's strictness, thereby ensuring that generated bounding boxes fall within a realistic and feasible range. The filtered synthetic pseudo-labels and real prior labels together form a comprehensive layout condition pool through additional enhancement strategies, including scaling, panning, rotating, and flipping.
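A minimal sketch of the normal-distribution filter, using box area as the screened attribute; the statistics `mu` and `sigma` would be estimated per category from the real training labels, and `k = 2.0` is an illustrative default:

```python
def filter_boxes_by_area(boxes, mu, sigma, k=2.0):
    """Keep boxes whose area lies within mu +/- k*sigma of the real-label
    area distribution; a larger k means a looser filter."""
    kept = []
    for (x1, y1, x2, y2) in boxes:
        area = (x2 - x1) * (y2 - y1)
        if abs(area - mu) <= k * sigma:
            kept.append((x1, y1, x2, y2))
    return kept
```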
Image Generation. Synthetic bounding box labels are drawn from the layout condition pool, and the corresponding synthetic images are generated with the layout-guided diffusion model through the process described in Sec. 3.1. The model uses these bounding box labels to guide generation, ensuring that the image content matches the given layout conditions.
Image Filter. Since the images generated by the diffusion model do not consistently meet high-quality or predefined layout requirements, a filtering mechanism is implemented to evaluate both generation quality and layout consistency. Semantic and layout consistency are evaluated using the CLIP model [19] and a ResNet101-based classifier [10], respectively. Synthetic images are then filtered by computing their CLIP scores and minimum per-object classification accuracies, which are compared against predefined thresholds to select the final images.
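A sketch of the two-threshold decision, assuming the CLIP similarity and per-crop classifier logits have already been computed; the threshold values are illustrative assumptions:

```python
import torch

@torch.no_grad()
def passes_image_filter(clip_score, crop_logits, labels,
                        tau_clip=0.25, tau_cls=0.5):
    """Accept a synthetic image only if its global CLIP image-text
    similarity is high enough (semantic consistency) and every object
    crop is classified as its intended category (layout consistency).
    clip_score: scalar CLIP similarity for the whole image;
    crop_logits: (N, num_classes) classifier logits, one row per box crop;
    labels: (N,) intended category index for each box."""
    if clip_score < tau_clip:
        return False
    probs = torch.softmax(crop_logits, dim=-1)
    per_box = probs[torch.arange(len(labels)), labels]
    return bool(per_box.min() >= tau_cls)
```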
Data Augmentation. The synthetic data serve as a complementary dataset alongside the real dataset, and both are used as training data for the downstream object detection model.
4 Experiments
In this section, we conduct extensive experiments to verify the generative capabilities of AeroGen and its ability to augment data for downstream RSIOD tasks. Specifically, we assess the performance of our layout generation model from both quantitative and qualitative perspectives. We then perform data augmentation experiments on three datasets (i.e., DIOR, DIOR-R, and HRSC) to verify the effectiveness of the synthetic data generated by AeroGen in improving downstream object detection performance.
4.1 Implementation Details
Table 1: Overview of the three RSIOD datasets.

| Dataset | Modality | Images | Objects | Categories |
|---|---|---|---|---|
| DIOR [15] | HBB | 23,463 | 192,518 | 20 |
| DIOR-R [2] | OBB | 23,463 | 192,518 | 20 |
| HRSC [21] | OBB | 1,061 | 2,976 | 19 |
Data Preparation. An overview of the three datasets is provided in Tab. 1. Notably, the DIOR and DIOR-R datasets [2] share the same image data but differ in annotation format: DIOR uses horizontal bounding boxes and DIOR-R uses rotated bounding boxes. HRSC [21] is a remote sensing dataset for ship detection, with image sizes ranging from 300 × 300 to 1500 × 900 pixels; it is divided into 436, 181, and 444 images for training, evaluation, and testing, respectively. The DIOR/DIOR-R dataset is split into training, validation, and testing sets in a 1:1:2 ratio, with generative model training conducted exclusively on the training set.
Table 2: Quantitative comparison of layout-to-image generation methods. † denotes GLIGEN retrained with a modified layout encoding to support rotated boxes (see Sec. 4.2).

| Method | Dataset | Modality | FID ↓ | CAS ↑ | YOLO Score ↑ |
|---|---|---|---|---|---|
| LostGAN [31] | DIOR [15] | HBB | 57.10 | 46.02 | 14.3/27.3/15.2 |
| LayoutDiffusion [42] | DIOR [15] | HBB | 45.31 | 56.98 | 20.0/37.4/19.3 |
| ReCo [38] | DIOR [15] | HBB | 42.56 | 55.42 | 21.1/40.7/23.1 |
| GLIGEN [16] | DIOR [15] | HBB | 41.31 | 63.50 | 25.8/44.4/27.8 |
| AeroGen (Ours) | DIOR [15] | HBB | 38.57 | 76.84 | 29.8/54.2/31.6 |
| GLIGEN [16]† | DIOR-R [2] | OBB | 48.43 | 58.89 | 24.6/41.6/25.1 |
| AeroGen (Ours) | DIOR-R [2] | OBB | 35.07 | 74.13 | 29.6/57.6/32.0 |
| GLIGEN [16]† | HRSC [21] | OBB | 66.69 | 43.35 | 23.4/44.7/26.3 |
| AeroGen (Ours) | HRSC [21] | OBB | 45.86 | 51.19 | 27.1/51.0/27.6 |
Training Details. We trained AeroGen separately on each dataset for 100 epochs using the AdamW optimizer [22] with a learning rate of 1e-5. Only the attention layers of the UNet and the Layout Mask Attention (LMA) module are updated; the remaining weights are inherited from the LDM fine-tuned on RS data [32].
Evaluation Metrics. For quantitative analysis of the generated images, we use the FID score to evaluate visual quality, and the Classification Score (CAS) [28] and YOLO Score [18] to measure layout consistency. In the data augmentation experiments, we assess object detection performance using the mAP50 and mAP50-95 (mAP) metrics.
4.2 Image Quality Results
Quantitative Evaluation. We used a bounding box condition defined by four extreme coordinates and conducted both training and testing on the DIOR dataset. We compared AeroGen with state-of-the-art layout-to-image generation methods, including LostGAN [31], ReCo [38], LayoutDiffusion [42], and GLIGEN [16]; their performance on the three metrics is reported in Tab. 2. To ensure fairness, we initialized all methods with identical SD weights and trained them on the DIOR dataset for the same number of epochs. Our method outperforms the others across all metrics.
Furthermore, we evaluated AeroGen and GLIGEN on the DIOR-R and HRSC datasets with rotated bounding boxes, where AeroGen consistently excelled. Notably, the original GLIGEN method does not support rotated bounding box conditions; we therefore modified its layout encoding (as shown in Fig. 1(a)) and retrained the model.
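For reference, a rotated annotation given as center, size, and angle can be mapped into the shared eight-coordinate representation of Sec. 3.1 as follows (a hypothetical helper; the paper does not specify its exact conversion):

```python
import math

def obb_to_coords(cx, cy, w, h, theta):
    """Convert a rotated box (center cx, cy; size w, h; angle theta in
    radians) into the eight-value corner list used by the layout encoder."""
    dx, dy = w / 2.0, h / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    coords = []
    for px, py in [(-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)]:
        coords += [cx + px * cos_t - py * sin_t,
                   cy + px * sin_t + py * cos_t]
    return coords  # [x1, y1, x2, y2, x3, y3, x4, y4]
```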
Qualitative Evaluation. Fig. 3 compares the results of AeroGen with those of other methods. AeroGen shows superior layout consistency and an enhanced capability for generating small objects. In addition, we present experimental results on natural images in the supplementary material.
4.3 Data Augmentation Experiments
We synthesized data on the three RSIOD datasets for data augmentation. For the DIOR/DIOR-R datasets, we synthesized 10k, 20k, and 50k samples; for the HRSC dataset, we synthesized 2k, 4k, and 10k samples in the same ratio. Training used the YOLOv8 [34] and Oriented R-CNN [37] detection setups (with their OBB branches for the rotated-box datasets), and model performance was verified on the corresponding test sets. The experimental results are shown in Tab. 3: adding synthetic data significantly improves performance on downstream tasks.
Table 3: Detection performance with varying amounts of synthetic data on (a) DIOR, (b) DIOR-R, and (c) HRSC.

(a) DIOR

| Gen Data | mAP | mAP50 |
|---|---|---|
| 0 | 54.22 | 72.69 |
| 10k | 55.62 | 74.79 |
| 20k | 56.78 | 76.31 |
| 50k | 57.92 | 77.10 |

(b) DIOR-R

| Gen Data | mAP | mAP50 |
|---|---|---|
| 0 | 37.39 | 60.21 |
| 10k | 39.81 | 62.39 |
| 20k | 41.12 | 63.33 |
| 50k | 41.69 | 64.12 |

(c) HRSC

| Gen Data | mAP | mAP50 |
|---|---|---|
| 0 | 63.49 | 90.28 |
| 2k | 64.12 | 91.79 |
| 4k | 64.78 | 92.31 |
| 10k | 65.92 | 93.10 |
We visualize the per-category mAP scores in Fig. 4. In most categories, results with augmentation significantly outperform those without, particularly for rarer categories, achieving improvements of 17.8%, 14.7%, and 12.6% in the GF, DAM, and APO categories, respectively.
Table 4: Comparison with traditional augmentation strategies on DIOR-R.

| Strategy | mAP | mAP50 |
|---|---|---|
| Flip | 37.39 | 60.21 |
| CopyPaste [6] | 38.25 | 61.79 |
| AeroGen | 41.32 | 63.98 |
| CopyPaste [6] + Flip | 38.75 | 62.11 |
| AeroGen + Flip | 41.69 | 64.12 |
4.4 Ablation Study
Ablation of Augmentation Methods. We compared synthetic-data augmentation with traditional approaches, including the basic Flip and Copy-Paste [6] augmentation techniques for object detection, as shown in Tab. 4. The detection model trained with synthetic data performs significantly better than with the traditional methods, demonstrating the generative model's effectiveness for data augmentation.
Ablation of Different Modules. We assess the impact of different modules on the image quality generated by AeroGen in Tab. 5, evaluating each module's contribution by incorporating additional components into the original SD model. The results show that Layout Mask Attention (LMA) effectively captures global semantic information and preserves layout consistency, while adding Dual Cross-Attention (DCA) further enhances performance, particularly on YOLO Score, indicating improved regional target generation. Overall, the model performs best when both LMA and DCA are used.
Table 5: Ablation of AeroGen modules (FID ↓, CAS ↑, YOLO Score ↑).

| LMA | DCA | FID | CAS | YOLO Score |
|---|---|---|---|---|
| ✗ | ✗ | 82.11 | 18.48 | 1.3/3.9/1.1 |
| ✗ | ✓ | 66.29 | 40.71 | 16.5/29.2/17.7 |
| ✓ | ✗ | 61.50 | 50.11 | 25.3/46.5/27.2 |
| ✓ | ✓ | 38.57 | 76.84 | 29.8/54.2/31.6 |
Table 6: Ablation of the generation pipeline on DIOR-R.

| Synthesis (Layout) | Synthesis (Diversity) | Filter (Layout) | Filter (Semantic) | Augment | mAP | mAP50 |
|---|---|---|---|---|---|---|
| ✓ | ✓ | ✓ | ✓ | ✓ | 41.69 | 64.12 |
| ✗ | ✗ | ✓ | ✓ | ✓ | 41.31 | 63.47 |
| ✓ | ✗ | ✓ | ✓ | ✓ | 40.92 | 62.41 |
| ✓ | ✓ | ✗ | ✓ | ✓ | 39.62 | 61.32 |
| ✓ | ✓ | ✓ | ✗ | ✓ | 40.27 | 62.13 |
| ✓ | ✓ | ✓ | ✓ | ✗ | 37.05 | 60.03 |
Ablation of the Augmentation Pipeline. We further analyze the filtering and data augmentation strategies in the generation pipeline, including the diverse generation strategy, the filtering strategy for layout conditions, and the filtering strategies for the layout and semantic consistency of images. We use synthetic data generated under these various configurations as augmentation data and conduct experiments on the DIOR-R dataset; the results are shown in Tab. 6. As can be seen, each component of the generation pipeline contributes positively.
5 Conclusion
This paper introduces AeroGen, a layout-controllable diffusion model designed to enhance remote sensing image datasets for object detection. The model comprises two primary components: a layout generation model that creates high-quality remote sensing images from predefined layout conditions, and a data generation pipeline that incorporates a diversity-conditioned layout generator for the diffusion model. The pipeline employs a double filtering mechanism to exclude low-quality generation conditions and images, thereby ensuring the semantic and layout consistency of the generated images. By combining synthetic and real images in the training set, AeroGen significantly improves model performance on downstream tasks. This work highlights the potential of generative modeling for enhancing datasets in remote sensing image processing tasks.
References
- Chen et al. [2024] Yicheng Chen, Xiangtai Li, Yining Li, Yanhong Zeng, Jianzong Wu, Xiangyu Zhao, and Kai Chen. Auto cherry-picker: Learning from high-quality generative data driven by language. arXiv preprint arXiv:2406.20085, 2024.
- Cheng et al. [2022] Gong Cheng, Jiabao Wang, Ke Li, Xingxing Xie, Chunbo Lang, Yanqing Yao, and Junwei Han. Anchor-free oriented proposal generator for object detection. IEEE Transactions on Geoscience and Remote Sensing, 60:1–11, 2022.
- Chlap et al. [2021] Phillip Chlap, Hang Min, Nym Vandenberg, Jason Dowling, Lois Holloway, and Annette Haworth. A review of medical image data augmentation techniques for deep learning applications. Journal of Medical Imaging and Radiation Oncology, 65(5):545–563, 2021.
- Chung et al. [2022] Hyungjin Chung, Eun Sun Lee, and Jong Chul Ye. MR image denoising and super-resolution using regularized reverse diffusion. IEEE Transactions on Medical Imaging, 42(4):922–934, 2022.
- Chung et al. [2024] Jiwoo Chung, Sangeek Hyun, and Jae-Pil Heo. Style injection in diffusion: A training-free approach for adapting large-scale diffusion models for style transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8795–8805, 2024.
- Dwibedi et al. [2017] Debidatta Dwibedi, Ishan Misra, and Martial Hebert. Cut, paste and learn: Surprisingly easy synthesis for instance detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 1301–1310, 2017.
- Fan et al. [2024] Chengxiang Fan, Muzhi Zhu, Hao Chen, Yang Liu, Weijia Wu, Huaqi Zhang, and Chunhua Shen. DiverGen: Improving instance segmentation by learning wider data distribution with more diverse generative data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3986–3995, 2024.
- Graikos et al. [2024] Alexandros Graikos, Srikar Yellapragada, Minh-Quan Le, Saarthak Kapse, Prateek Prasanna, Joel Saltz, and Dimitris Samaras. Learned representation-guided diffusion models for large-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8532–8542, 2024.
- Gupta et al. [2019] Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5356–5364, 2019.
- He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
- Ho et al. [2020] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
- Ho et al. [2022] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, et al. Imagen Video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303, 2022.
- Khanna et al. [2023] Samar Khanna, Patrick Liu, Linqi Zhou, Chenlin Meng, Robin Rombach, Marshall Burke, David B. Lobell, and Stefano Ermon. DiffusionSat: A generative foundation model for satellite imagery. In The Twelfth International Conference on Learning Representations, 2023.
- Kisantal [2019] Mate Kisantal. Augmentation for small object detection. arXiv preprint arXiv:1902.07296, 2019.
- Li et al. [2020] Ke Li, Gang Wan, Gong Cheng, Liqiu Meng, and Junwei Han. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS Journal of Photogrammetry and Remote Sensing, 159:296–307, 2020.
- Li et al. [2023] Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. GLIGEN: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22511–22521, 2023.
- Li et al. [2024] Yuhang Li, Xin Dong, Chen Chen, Weiming Zhuang, and Lingjuan Lyu. A simple background augmentation method for object detection with diffusion model. arXiv preprint arXiv:2408.00350, 2024.
- Li et al. [2021] Zejian Li, Jingyu Wu, Immanuel Koh, Yongchuan Tang, and Lingyun Sun. Image synthesis from layout with locality-aware mask adaption. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13819–13828, 2021.
- Liu et al. [2024] Fan Liu, Delong Chen, Zhangqingyun Guan, Xiaocong Zhou, Jiale Zhu, Qiaolin Ye, Liyong Fu, and Jun Zhou. RemoteCLIP: A vision language foundation model for remote sensing. IEEE Transactions on Geoscience and Remote Sensing, 2024.
- Liu et al. [2016a] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single shot multibox detector. In Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I, pages 21–37. Springer, 2016a.
- Liu et al. [2016b] Zikun Liu, Hongzhen Wang, Lubin Weng, and Yiping Yang. Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds. IEEE Geoscience and Remote Sensing Letters, 13(8):1074–1078, 2016b.
- Loshchilov [2017] I. Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
- Mildenhall et al. [2021] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
- Nichol and Dhariwal [2021] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162–8171. PMLR, 2021.
- Pang et al. [2024] Li Pang, Datao Tang, Shuang Xu, Deyu Meng, and Xiangyong Cao. HSIGene: A foundation model for hyperspectral image generation. arXiv preprint arXiv:2409.12470, 2024.
- Podell et al. [2023] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023.
- Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
- Ravuri and Vinyals [2019] Suman Ravuri and Oriol Vinyals. Classification accuracy score for conditional generative models. Advances in Neural Information Processing Systems, 32, 2019.
- Reed et al. [2016] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning, pages 1060–1069. PMLR, 2016.
- Rombach et al. [2022] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
- Sun and Wu [2019] Wei Sun and Tianfu Wu. Image synthesis from reconfigurable layout and style. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10531–10540, 2019.
- Tang et al. [2024] Datao Tang, Xiangyong Cao, Xingsong Hou, Zhongyuan Jiang, Junmin Liu, and Deyu Meng. CRS-Diff: Controllable remote sensing image generation with diffusion model. IEEE Transactions on Geoscience and Remote Sensing, 2024.
- Toker et al. [2024] Aysim Toker, Marvin Eisenberger, Daniel Cremers, and Laura Leal-Taixé. SatSynth: Augmenting image-mask pairs through diffusion models for aerial semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 27695–27705, 2024.
- Varghese and Sambath [2024] Rejin Varghese and M. Sambath. YOLOv8: A novel object detection algorithm with enhanced performance and robustness. In 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), pages 1–6. IEEE, 2024.
- Wang et al. [2024] Xudong Wang, Trevor Darrell, Sai Saketh Rambhatla, Rohit Girdhar, and Ishan Misra. InstanceDiffusion: Instance-level control for image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6232–6242, 2024.
- Wang et al. [2023] Zhizhong Wang, Lei Zhao, and Wei Xing. StyleDiffusion: Controllable disentangled style transfer via diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7677–7689, 2023.
- Xie et al. [2021] Xingxing Xie, Gong Cheng, Jiabao Wang, Xiwen Yao, and Junwei Han. Oriented R-CNN for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3520–3529, 2021.
- Yang et al. [2023] Zhengyuan Yang, Jianfeng Wang, Zhe Gan, Linjie Li, Kevin Lin, Chenfei Wu, Nan Duan, Zicheng Liu, Ce Liu, Michael Zeng, et al. ReCo: Region-controlled text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14246–14255, 2023.
- Zhang et al. [2017] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5907–5915, 2017.
- Zhang et al. [2020] Xin Zhang, Liangxiu Han, Lianghao Han, and Liang Zhu. How well do deep learning-based methods for land cover classification and object detection perform on high resolution remote sensing imagery? Remote Sensing, 12(3):417, 2020.
- Zhao et al. [2023] Hanqing Zhao, Dianmo Sheng, Jianmin Bao, Dongdong Chen, Dong Chen, Fang Wen, Lu Yuan, Ce Liu, Wenbo Zhou, Qi Chu, et al. X-Paste: Revisiting scalable copy-paste for instance segmentation using CLIP and StableDiffusion. In International Conference on Machine Learning, pages 42098–42109. PMLR, 2023.
- Zheng et al. [2023] Guangcong Zheng, Xianpan Zhou, Xuewei Li, Zhongang Qi, Ying Shan, and Xi Li. LayoutDiffusion: Controllable diffusion model for layout-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22490–22499, 2023.
- Zheng et al. [2021] Zhuo Zheng, Yanfei Zhong, Junjue Wang, Ailong Ma, and Liangpei Zhang. Building damage assessment for rapid disaster response with a deep object-based semantic change detection framework: From natural disasters to man-made disasters. Remote Sensing of Environment, 265:112636, 2021.
- Zheng et al. [2024] Zhuo Zheng, Stefano Ermon, Dongjun Kim, Liangpei Zhang, and Yanfei Zhong. Changen2: Multi-temporal remote sensing generative change foundation model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
- Zhou et al. [2024] Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, and Yi Yang. MIGC: Multi-instance generation controller for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6818–6828, 2024.
- Zhu et al. [2024] Jingyuan Zhu, Shiyu Li, Yuxuan Liu, Ping Huang, Jiulong Shan, Huimin Ma, and Jian Yuan. ODGEN: Domain-specific object detection data generation with diffusion models. arXiv preprint arXiv:2405.15199, 2024.
- Zou et al. [2023] Zhengxia Zou, Keyan Chen, Zhenwei Shi, Yuhong Guo, and Jieping Ye. Object detection in 20 years: A survey. Proceedings of the IEEE, 111(3):257–276, 2023.