Last updated: March 16, 2026

When managing infrastructure at scale, you often encounter a common challenge: existing cloud resources that were created manually or through other means need to be brought under Terraform management. The traditional approach requires manually writing import blocks—a tedious process that involves looking up resource IDs, identifying the correct import syntax, and ensuring the generated Terraform code matches the existing configuration. This is where AI coding assistants can significantly accelerate your workflow.

This guide shows you how to use AI tools to generate Terraform import blocks for existing resources, reducing hours of manual work to minutes.

Why Import Blocks Matter in Terraform

Terraform import blocks allow you to bring existing infrastructure under Terraform’s control without destroying and recreating resources. This is essential for adopting resources that were created manually in the console, migrating from another IaC tool, and recovering when a state file is lost or corrupted.

The import block syntax in Terraform looks like this:

import {
  to = aws_instance.example
  id = "i-0abc123456789def0"
}

For each resource you need to import, you must know the resource type, the target resource address, and the unique identifier from your cloud provider. When managing hundreds of resources across multiple accounts, this becomes overwhelming quickly.
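Those three pieces of information map one-to-one onto the block's fields. A minimal Python sketch (the `render_import_block` helper is illustrative, not part of any tool) that stamps them out:

```python
# Illustrative sketch: render an import block from the three pieces of
# information listed above (resource type, target address, provider ID).
def render_import_block(resource_type: str, name: str, resource_id: str) -> str:
    return (
        "import {\n"
        f"  to = {resource_type}.{name}\n"
        f'  id = "{resource_id}"\n'
        "}\n"
    )

print(render_import_block("aws_instance", "example", "i-0abc123456789def0"))
```

Generating blocks mechanically like this is exactly the repetitive work an AI assistant can take over once the resource inventory gets large.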

Which AI Tools Work Best for Terraform Import Blocks

Several AI coding assistants handle Terraform generation well, each with different strengths.

For bulk Terraform import generation, Claude and Amazon Q tend to produce the fewest errors, as both handle infrastructure code well. For quick one-off imports while already in VS Code, Copilot’s in-editor context wins on convenience.

Using AI to Generate Import Blocks

AI assistants can help in several ways: identifying what resources exist in your cloud environment, determining the correct resource types, and generating the appropriate import blocks. Here’s how to approach this effectively.

Step 1: Gather Your Resource Information

Before asking AI for help, collect basic information about the resources you want to import: the resource IDs, the resource types, and the region or account each one lives in.

For AWS, you might run:

aws ec2 describe-instances --query 'Reservations[*].Instances[*].InstanceId' --output text
aws s3api list-buckets --query 'Buckets[].Name' --output text
aws rds describe-db-instances --query 'DBInstances[*].DBInstanceIdentifier' --output text

For GCP resources, use:

gcloud compute instances list --format="value(name,selfLink)"
gcloud sql instances list --format="value(name)"

Provide this information to your AI assistant as context.

Step 2: Craft an Effective Prompt

The quality of AI-generated import blocks depends heavily on how you frame your request. Here’s an effective prompt structure:

“Generate Terraform import blocks for the following AWS resources. Use the current Terraform AWS provider syntax. For each resource, create an import block and the corresponding resource definition:

[paste the resource IDs and types you gathered in Step 1]

Format the output as complete Terraform code that can be applied directly.”
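If you gather resources programmatically, the same prompt can be assembled in code. A hypothetical helper (the function name and the dict shape are assumptions, matching the discovery script later in this guide):

```python
# Hypothetical helper: assemble the prompt above from a dict mapping
# resource categories to lists of cloud IDs.
def build_prompt(resources: dict[str, list[str]]) -> str:
    lines = [
        "Generate Terraform import blocks for the following AWS resources.",
        "Use the current Terraform AWS provider syntax. For each resource,",
        "create an import block and the corresponding resource definition:",
        "",
    ]
    for category, ids in resources.items():
        lines.append(f"{category}:")
        lines.extend(f"  - {rid}" for rid in ids)
    lines += ["", "Format the output as complete Terraform code that can be applied directly."]
    return "\n".join(lines)

print(build_prompt({"ec2_instances": ["i-0abc123456789def0"]}))
```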

Step 3: Review AI-Generated Code Carefully

AI generates correct import blocks most of the time, but you must verify the output. Check these common issues:

Resource type accuracy: Ensure the AI chose the correct resource type. For example, the EC2 instance resource is aws_instance, not aws_ec2_instance; the latter does not exist in the AWS provider and is a common AI hallucination.
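A cheap way to catch an invented resource type before Terraform does is to compare generated types against the provider's real schema (terraform providers schema -json lists every type). A sketch, with a tiny illustrative subset standing in for the full schema:

```python
# Sketch: flag resource types that don't exist in the provider schema.
# KNOWN_TYPES is a tiny illustrative subset; in practice, load the full
# list from `terraform providers schema -json`.
KNOWN_TYPES = {"aws_instance", "aws_s3_bucket", "aws_vpc", "aws_db_instance"}

def unknown_types(generated_types):
    return sorted(set(generated_types) - KNOWN_TYPES)

print(unknown_types(["aws_instance", "aws_ec2_instance"]))  # → ['aws_ec2_instance']
```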

Identifier format: Some resources require compound identifiers. An S3 bucket import needs only the bucket name:

import {
  to = aws_s3_bucket.example
  id = "my-bucket-name"
}

But a route table association requires the subnet ID and the route table ID joined by a slash:

import {
  to = aws_route_table_association.example
  id = "subnet-0123456789abcdef0/rtb-0fedcba9876543210"
}

Provider configuration: Verify the provider in the generated code matches your setup. AWS resources might need:

provider "aws" {
  region = "us-east-1"
}

Practical Example: Importing an AWS VPC

Suppose you have an existing VPC with ID vpc-0a1b2c3d4e5f6g7h8 that you need to manage with Terraform. Here’s how to work with AI to generate the import:

Ask your AI assistant:

Generate Terraform code to import an existing AWS VPC with ID vpc-0a1b2c3d4e5f6g7h8. Include:
1. The import block
2. The resource definition with minimal required arguments
3. Use us-east-1 as the region

The AI should produce:

import {
  to = aws_vpc.main
  id = "vpc-0a1b2c3d4e5f6g7h8"
}

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "main-vpc"
  }
}

After generating this code, run terraform plan to verify Terraform can read the existing VPC and reconcile any differences between your configuration and the actual state.

Advanced AI Strategies for Bulk Imports

When dealing with dozens or hundreds of resources, adjust your approach:

Batch by resource type: Group similar resources together in your prompts. Ask for all EC2 instances first, then S3 buckets, then RDS instances. This reduces confusion and makes the AI’s output more organized.
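That batching advice is easy to script. A sketch that chunks one resource type's IDs into prompt-sized groups (the batch size of 25 is arbitrary):

```python
from itertools import islice

# Sketch: yield fixed-size batches of resource IDs so each AI prompt
# stays focused on one manageable group.
def batches(ids, size=25):
    it = iter(ids)
    while chunk := list(islice(it, size)):
        yield chunk

instance_ids = [f"i-{n:017x}" for n in range(60)]
print([len(b) for b in batches(instance_ids)])  # → [25, 25, 10]
```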

Use data sources to cross-check: before importing, a data source can read a resource’s live configuration from the provider, which helps confirm you have the right ID:

data "aws_instance" "existing" {
  instance_id = "i-0abc123456789def0"
}

import {
  to = aws_instance.managed
  id = data.aws_instance.existing.id
}

Note that the id in an import block must be known at plan time, and some Terraform versions only accept a static string there; if the plan rejects the expression, inline the literal ID instead.

Cross-reference documentation: When AI generates import blocks for unfamiliar resource types, ask it to include comments referencing the official Terraform provider documentation.

Automating Resource Discovery Before Prompting

Instead of manually running CLI commands, automate resource discovery with a script that feeds directly into your AI prompt:

import subprocess
import json

def gather_aws_resources():
    """Collect AWS resource IDs to feed into an AI import prompt."""
    resources = {}

    # EC2 instances
    result = subprocess.run(
        ["aws", "ec2", "describe-instances",
         "--query", "Reservations[*].Instances[*].{id:InstanceId,name:Tags[?Key=='Name']|[0].Value}",
         "--output", "json"],
        capture_output=True, text=True, check=True  # fail fast if the CLI errors
    )
    instances = json.loads(result.stdout)
    resources["ec2_instances"] = [i for sublist in instances for i in sublist]

    # S3 buckets
    result = subprocess.run(
        ["aws", "s3api", "list-buckets", "--query", "Buckets[].Name", "--output", "json"],
        capture_output=True, text=True, check=True
    )
    resources["s3_buckets"] = json.loads(result.stdout)

    return resources

resources = gather_aws_resources()
print(json.dumps(resources, indent=2))

Feed this output directly into your AI prompt. With resource names already in JSON format, Claude or ChatGPT can generate properly named Terraform resources with meaningful identifiers rather than generic example labels.

Validating AI Output with terraform plan

Never apply AI-generated import blocks without reviewing a plan first. Terraform 1.5 added terraform plan -generate-config-out=generated.tf, which generates resource configuration automatically based on what the provider discovers (it only does so for import targets that don’t already have a resource block):

# Write the import block to a file
cat > imports.tf << 'EOF'
import {
  to = aws_vpc.main
  id = "vpc-0a1b2c3d4e5f6g7h8"
}
EOF

# Let Terraform generate the matching resource config
terraform plan -generate-config-out=generated.tf

# Review the generated file before applying
cat generated.tf
terraform apply

This approach combines AI-generated import blocks with Terraform’s native config generation, giving you a double-verification pass. The AI handles the import block structure; Terraform fills in the exact resource attributes from your live infrastructure.
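Before handing blocks to Terraform at all, a lightweight lint can catch truncated or malformed AI output. A hypothetical check (the regexes are a rough sketch, not a real HCL parser):

```python
import re

# Rough sketch, not a real HCL parser: confirm every import block in a
# file has both a "to" address and a non-empty quoted "id".
def check_import_blocks(hcl: str) -> list[str]:
    problems = []
    for i, block in enumerate(re.findall(r"import\s*\{([^}]*)\}", hcl)):
        if not re.search(r"\bto\s*=\s*\S+", block):
            problems.append(f"block {i}: missing 'to' address")
        if not re.search(r'\bid\s*=\s*"[^"]+"', block):
            problems.append(f"block {i}: missing or empty 'id'")
    return problems

good = 'import {\n  to = aws_vpc.main\n  id = "vpc-123"\n}\n'
print(check_import_blocks(good))  # → []
```

A check like this is a complement to terraform plan, not a substitute: the plan still verifies the IDs against your live infrastructure.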

Limitations to Understand

AI tools have boundaries you should recognize: they cannot see your live infrastructure, so every resource ID must come from your own discovery step; their training data can lag behind the current provider release, producing deprecated arguments; and they often guess wrong on compound identifier formats, which vary by resource type. Treat generated blocks as drafts to verify, never as finished code.

Best Practices for AI-Assisted Imports

  1. Always run terraform plan before applying import blocks to verify what will change

  2. Use separate import files per resource type or environment for organization

  3. Add meaningful tags to imported resources for better management

  4. Version your provider explicitly to avoid unexpected behavior changes

  5. Backup your state before running imports on production resources

  6. Iterate with the AI — if the first output has errors, paste the error message back and ask for a correction
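Best practice 4 has a standard shape: pinning the AWS provider with a required_providers block looks like this (the 5.x constraint is an example; choose the version you have tested against):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```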

Frequently Asked Questions

How long does it take to use AI to generate Terraform import blocks?

For a small number of resources, expect 30 minutes to 2 hours depending on your familiarity with Terraform and your cloud provider’s CLI. Bulk imports across hundreds of resources take longer, mostly in review and plan runs. Having your credentials and resource inventory ready before starting saves significant time.

What are the most common mistakes to avoid?

The most frequent issues are skipping the plan review, accepting hallucinated resource types, and getting compound identifier formats wrong. Follow the steps in order, verify each batch with terraform plan before moving on, and check the provider documentation if something behaves unexpectedly.

Do I need prior experience to follow this guide?

Basic familiarity with Terraform and the command line is helpful but not strictly required. Each step is explained with context. If you get stuck, the official Terraform documentation covers fundamentals that may fill in knowledge gaps.

Can I adapt this for a different cloud provider?

Yes, the workflow transfers directly to Azure, GCP, and other providers with Terraform support; only the resource types and import ID formats differ. The prompt structure and plan-based verification stay the same even when the syntax changes.

Where can I get help if I run into issues?

Start with the official Terraform provider documentation, which lists the import ID format for every resource type. Stack Overflow and GitHub issues on the provider repositories are good next steps for specific error messages. Community forums and Discord servers for Terraform and your AI tool often have active members who can help.

Built by theluckystrike — More at zovo.one