FLUX.1 Dev with LoRA support is the most widely adopted image generator on inference.sh by user count. Built by Black Forest Labs, it combines the FLUX.1 Dev base model with the ability to load custom LoRA adapters for style-specific generation. This means you can generate images in any fine-tuned style - your brand aesthetic, a specific artistic technique, a trained character - without hosting your own model. It is available at app.inference.sh/apps/falai/flux-dev-lora for $0.035 per megapixel.
what it does
FLUX Dev LoRA generates images from text prompts using Black Forest Labs' FLUX.1 Dev architecture with optional LoRA (Low-Rank Adaptation) weights loaded at generation time. LoRAs are small model adapters trained on specific styles, subjects, or concepts that modify the base model's output without changing its core capabilities.
The practical result: you get a state-of-the-art diffusion model that can switch styles on demand. Load a LoRA trained on anime aesthetics for one request, a photorealistic portrait LoRA for the next, and a product photography LoRA after that. The base model handles composition, physics, and coherence while the LoRA steers the visual style.
The app also supports image-to-image generation, where you provide a source image and the model transforms it according to your prompt and loaded LoRAs. A strength parameter controls how much the output deviates from the input.
key features
LoRA loading - Specify one or more LoRA adapters by URL with individual scale weights. Mix multiple LoRAs in a single generation for combined style effects.
Full diffusion control - Exposes guidance scale (CFG), inference steps, seed, and dimensions, giving you reproducible, tunable control over the generation process.
Image-to-image - Transform existing images with style transfer, subject modification, or creative reinterpretation. The strength parameter controls transformation intensity.
Per-megapixel pricing - Pay based on output resolution rather than a flat per-image rate. Small images for thumbnails cost less than large images for print.
Flexible dimensions - Set width and height independently to any pixel value. Not locked to preset aspect ratios.
Safety checker - Optional content safety filtering that can be enabled or disabled based on your application requirements.
use cases
Brand-consistent content - Train a LoRA on your brand's visual style, then generate unlimited on-brand imagery. Marketing teams can produce assets that match their design language without manual creation.
Character consistency - Use character-trained LoRAs to generate the same character in different poses, scenes, and contexts. Essential for comics, games, marketing campaigns, and storytelling.
Style exploration - Load different LoRAs to see how your concept looks across various artistic styles. Compare photorealistic, illustrated, painterly, and graphic interpretations in minutes.
Product visualization - Fine-tune a LoRA on your product line, then generate your products in any context - lifestyle settings, flat lays, environmental shots - without photography.
Art style replication - Train LoRAs on specific artistic techniques, historical art movements, or illustrative styles. Generate new compositions in those styles on demand.
Image-to-image workflows - Transform sketches into finished illustrations, apply consistent style to inconsistent source material, or iterate on compositions with controlled variation.
how to run
belt CLI
Basic text-to-image:
```shell
belt app run falai/flux-dev-lora --prompt "Portrait of a woman in a sunlit greenhouse, golden hour light filtering through glass panels, surrounded by tropical plants, soft bokeh background"
```

With a LoRA adapter:

```shell
belt app run falai/flux-dev-lora --prompt "A mountain cabin in winter, snow-covered pines, warm light from windows, STYLENAME style" --loras '[{"path": "https://huggingface.co/user/lora-name/resolve/main/lora.safetensors", "scale": 0.8}]'
```

With full parameter control:

```shell
belt app run falai/flux-dev-lora --prompt "Macro photography of morning dew on a spider web, iridescent light refraction, dark blurred forest background" --width 1344 --height 768 --guidance_scale 7.5 --num_inference_steps 28 --seed 12345
```

Image-to-image transformation:

```shell
belt app run falai/flux-dev-lora --prompt "Transform into a detailed pencil sketch with cross-hatching, maintain composition" --image "https://example.com/photo.jpg" --strength 0.75
```

Multiple LoRAs combined:

```shell
belt app run falai/flux-dev-lora --prompt "Futuristic city skyline at dusk" --loras '[{"path": "https://example.com/cyberpunk-lora.safetensors", "scale": 0.6}, {"path": "https://example.com/cinematic-lora.safetensors", "scale": 0.4}]'
```

API

```python
from inference import Client

client = Client()
result = client.run("falai/flux-dev-lora", {
    "prompt": "Editorial fashion photograph, model wearing avant-garde geometric clothing, stark white studio, dramatic directional lighting, high contrast",
    "width": 832,
    "height": 1216,
    "guidance_scale": 7.0,
    "num_inference_steps": 28,
    "loras": [
        {
            "path": "https://huggingface.co/user/fashion-editorial-v2/resolve/main/lora.safetensors",
            "scale": 0.85
        }
    ]
})
```

Image-to-image with LoRA:

```python
result = client.run("falai/flux-dev-lora", {
    "prompt": "Same composition rendered as a watercolor painting, soft edges, paint bleeding effects",
    "image": "https://example.com/reference.jpg",
    "strength": 0.65,
    "guidance_scale": 7.5,
    "num_inference_steps": 30
})
```

input parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt | string | yes | Text description of the desired image |
| loras | array | no | LoRA adapters to load. Each entry: {"path": "url", "scale": 0.0-1.0} |
| width | integer | no | Output width in pixels. Default varies by model |
| height | integer | no | Output height in pixels. Default varies by model |
| guidance_scale | number | no | CFG scale controlling prompt adherence. Higher = more literal |
| num_inference_steps | integer | no | Number of denoising steps. More steps = higher quality, slower generation |
| seed | integer | no | Random seed for reproducible outputs |
| image | string | no | Input image URL for image-to-image workflows |
| strength | number | no | Image-to-image transformation intensity. 0 = no change, 1 = complete regeneration |
| num_images | integer | no | Number of images to generate. Default is 1 |
| output_format | enum | no | Output format for the generated image |
| enable_safety_checker | boolean | no | Enable content safety filtering |
output
The app returns an images array containing generated image files. Each element is a downloadable file reference. The output_meta field provides generation metadata including actual parameters used.
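If you consume the result programmatically, a small helper can pull the downloadable references out of the response. This is a sketch, not part of the client API; it assumes each entry in the images array is a dict carrying a url field, so check the shape of the response you actually receive:

```python
def image_urls(result: dict) -> list[str]:
    """Collect downloadable URLs from a run result.

    Assumes each entry in the `images` array has a `url` field;
    adjust the key if your response shape differs.
    """
    return [entry["url"] for entry in result.get("images", [])]

# Example against a mocked response shape:
sample = {
    "images": [{"url": "https://example.com/out-0.png"}],
    "output_meta": {"seed": 12345},
}
print(image_urls(sample))  # ['https://example.com/out-0.png']
```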
pricing
$0.035 per megapixel, rounded up. Pricing scales with output resolution:
| Dimensions | Megapixels | Cost per image |
|---|---|---|
| 512x512 | 0.26 MP | ~$0.01 |
| 768x1024 | 0.79 MP | ~$0.03 |
| 1024x1024 | 1.05 MP | ~$0.04 |
| 1344x768 | 1.03 MP | ~$0.04 |
| 1536x1536 | 2.36 MP | ~$0.09 |
This per-megapixel model makes FLUX Dev LoRA one of the most cost-effective generators on the platform for standard resolutions. Small images for web thumbnails or social previews cost under a penny each.
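For budgeting, per-image cost can be estimated directly from the dimensions. The sketch below assumes the charge is megapixels times $0.035, rounded up to the nearest cent; that reproduces the table above but is only an approximation of the actual billing logic:

```python
import math

def estimated_cost(width: int, height: int, rate_per_mp: float = 0.035) -> float:
    """Rough per-image cost: megapixels times the per-MP rate, rounded up to the cent."""
    megapixels = (width * height) / 1_000_000
    return math.ceil(megapixels * rate_per_mp * 100) / 100

print(estimated_cost(512, 512))     # 0.01
print(estimated_cost(1024, 1024))   # 0.04
print(estimated_cost(1536, 1536))   # 0.09
```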
when to use flux dev lora vs alternatives
Choose FLUX Dev LoRA when you need custom style adaptation through LoRAs, want full control over the diffusion process, or need the best cost efficiency at standard resolutions.
Choose Gemini 3 Pro when you need language-model-level prompt understanding, image editing with multiple references, or Google Search grounding.
Choose Seedream 4.5 when you want high resolution (2K-4K) with minimal configuration and do not need LoRA support.
Choose Qwen Image 2 Pro when text rendering or infographic generation is your primary need.
Choose Grok Imagine Pro when you want simplicity without parameter tuning and need batch sizes up to 10.
FAQ
Where do I get LoRA files?
LoRAs are available from Hugging Face, Civitai, and other model hosting platforms. You can also train your own using tools like kohya_ss or ai-toolkit. The LoRA must be compatible with FLUX.1 Dev architecture. Provide the direct download URL to the .safetensors file.
Can I combine multiple LoRAs?
Yes. Pass multiple entries in the loras array, each with its own URL and scale weight. Reduce individual scales when combining (e.g., 0.5 + 0.5 rather than 1.0 + 1.0) to prevent over-saturation. The effects are additive.
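One way to keep combined effects in check is to normalize the scales so they sum to a fixed budget before passing the array to the app. This helper is not part of the API, just a convenience sketch:

```python
def normalize_lora_scales(loras: list[dict], budget: float = 1.0) -> list[dict]:
    """Rescale LoRA weights so the combined scale sums to `budget`,
    preserving each adapter's relative influence."""
    total = sum(lora["scale"] for lora in loras)
    if total == 0:
        return loras
    return [{**lora, "scale": round(lora["scale"] * budget / total, 3)} for lora in loras]

loras = [
    {"path": "https://example.com/cyberpunk-lora.safetensors", "scale": 1.0},
    {"path": "https://example.com/cinematic-lora.safetensors", "scale": 1.0},
]
print([lora["scale"] for lora in normalize_lora_scales(loras)])  # [0.5, 0.5]
```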
What guidance_scale should I use?
Values between 3.5 and 8.0 work well for most prompts. Lower values (3.5-5.0) give the model more creative freedom. Higher values (6.0-8.0) enforce stricter prompt adherence but can reduce naturalness. Start at 7.0 and adjust based on results.
How many inference steps do I need?
20-30 steps is the practical range for FLUX Dev. Below 20, quality degrades noticeably. Above 30, improvements become marginal. Use 25-28 for production work. More steps increase generation time linearly.
What does the strength parameter do for image-to-image?
Strength controls how much the output can deviate from the input image. At 0.3, the output closely resembles the input with subtle style changes. At 0.7, the composition is preserved but significant visual transformation occurs. At 1.0, the input is essentially ignored. Start at 0.6-0.7 for style transfer workflows.