From oraclecloud-pack
Deploys containers to OCI using OKE Kubernetes clusters or Container Instances. Pushes images to OCIR and provisions clusters via Python SDK.
Install:

```bash
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin oraclecloud-pack
```
Deploy containerized applications to OCI using either OKE (Oracle Kubernetes Engine) or Container Instances. OKE provides full Kubernetes but requires 4x more config than EKS — you need a VCN, subnet, node pool, OCIR registry, and IAM policies before a single pod runs. Container Instances offer a simpler serverless alternative for workloads that don't need Kubernetes orchestration.
Purpose: Get containers running on OCI through both the full Kubernetes path (OKE) and the simpler Container Instances path, with working manifests and registry auth.
Prerequisites:

- OCI credentials configured in `~/.oci/config`
- `pip install oci` for SDK-based provisioning

Oracle Cloud Infrastructure Registry (OCIR) is OCI's Docker-compatible registry. Auth uses an OCI auth token, not your API key:
```bash
# Generate an auth token: Console > Profile > Auth Tokens > Generate Token
# Save the token — it's only shown once

# Login to OCIR (format: {region-key}.ocir.io/{namespace})
docker login us-ashburn-1.ocir.io
# Username: {tenancy-namespace}/oracleidentitycloudservice/{email}
# Password: your auth token

# Tag and push
docker tag myapp:latest us-ashburn-1.ocir.io/{namespace}/myapp:latest
docker push us-ashburn-1.ocir.io/{namespace}/myapp:latest
```
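The registry path and username formats above are easy to get wrong. A small pair of helpers (hypothetical names, a sketch assuming an IDCS-federated user) keeps them consistent across scripts:

```python
def ocir_image(region_key: str, namespace: str, repo: str, tag: str = "latest") -> str:
    # Fully qualified OCIR image reference: {region-key}.ocir.io/{namespace}/{repo}:{tag}
    return f"{region_key}.ocir.io/{namespace}/{repo}:{tag}"

def ocir_username(namespace: str, email: str, federated: bool = True) -> str:
    # Federated (IDCS) users log in as {namespace}/oracleidentitycloudservice/{email};
    # non-federated IAM users drop the oracleidentitycloudservice segment.
    prefix = "oracleidentitycloudservice/" if federated else ""
    return f"{namespace}/{prefix}{email}"

print(ocir_image("us-ashburn-1", "mytenancy", "myapp"))
# us-ashburn-1.ocir.io/mytenancy/myapp:latest
```

The `mytenancy` namespace here is a placeholder; substitute your tenancy's object storage namespace.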
Use the OCI Python SDK to provision an OKE cluster programmatically:
```python
import oci

config = oci.config.from_file("~/.oci/config")
container_engine = oci.container_engine.ContainerEngineClient(config)

# Create cluster
create_cluster_response = container_engine.create_cluster(
    oci.container_engine.models.CreateClusterDetails(
        name="my-oke-cluster",
        compartment_id="ocid1.compartment.oc1..example",
        vcn_id="ocid1.vcn.oc1..example",
        kubernetes_version="v1.28.2",
        options=oci.container_engine.models.ClusterCreateOptions(
            service_lb_subnet_ids=["ocid1.subnet.oc1..example-public"],
            kubernetes_network_config=oci.container_engine.models.KubernetesNetworkConfig(
                pods_cidr="10.244.0.0/16",
                services_cidr="10.96.0.0/16"
            )
        )
    )
)

# create_cluster is async; the response header carries a work request ID,
# not the cluster OCID
work_request_id = create_cluster_response.headers["opc-work-request-id"]
print(f"Cluster creation initiated, work request: {work_request_id}")
```
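Because `create_cluster` returns before the cluster exists, you poll the work request until it reaches a terminal state. A minimal polling loop (a sketch; `get_work_request` is a real `ContainerEngineClient` call, but the helper name is ours):

```python
import time

def wait_for_work_request(ce_client, work_request_id, interval=30, timeout=1800):
    # Poll an OKE work request until it reaches a terminal state.
    deadline = time.time() + timeout
    while time.time() < deadline:
        wr = ce_client.get_work_request(work_request_id).data
        if wr.status in ("SUCCEEDED", "FAILED", "CANCELED"):
            return wr.status
        time.sleep(interval)
    raise TimeoutError(f"work request {work_request_id} still running after {timeout}s")

# Usage with the client from above:
# status = wait_for_work_request(
#     container_engine, create_cluster_response.headers["opc-work-request-id"])
```

Cluster creation commonly takes 10-15 minutes, hence the generous default timeout.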
OKE clusters need at least one node pool to schedule workloads:
```python
container_engine.create_node_pool(
    oci.container_engine.models.CreateNodePoolDetails(
        compartment_id="ocid1.compartment.oc1..example",
        cluster_id="ocid1.cluster.oc1..example",
        name="pool-1",
        kubernetes_version="v1.28.2",
        node_shape="VM.Standard.E4.Flex",
        node_shape_config=oci.container_engine.models.CreateNodeShapeConfigDetails(
            ocpus=2.0,
            memory_in_gbs=16.0
        ),
        node_config_details=oci.container_engine.models.CreateNodePoolNodeConfigDetails(
            size=3,
            placement_configs=[
                oci.container_engine.models.NodePoolPlacementConfigDetails(
                    availability_domain="Uocm:US-ASHBURN-AD-1",
                    subnet_id="ocid1.subnet.oc1..example-private"
                )
            ]
        )
    )
)
```
```bash
# Install the OCI CLI and set up kubeconfig
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..example \
  --file ~/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0

# Verify connectivity
kubectl get nodes
```
Create a Kubernetes deployment manifest with OCIR image pull secret:
```bash
# Create OCIR pull secret
kubectl create secret docker-registry ocir-secret \
  --docker-server=us-ashburn-1.ocir.io \
  --docker-username='{namespace}/oracleidentitycloudservice/{email}' \
  --docker-password='{auth-token}' \
  --docker-email='{email}'
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
        - name: ocir-secret
      containers:
        - name: myapp
          image: us-ashburn-1.ocir.io/{namespace}/myapp:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "250m"
              memory: "512Mi"
            limits:
              cpu: "500m"
              memory: "1Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```
For workloads that don't need Kubernetes, Container Instances provide a serverless option:
```python
import oci

config = oci.config.from_file("~/.oci/config")
ci_client = oci.container_instances.ContainerInstanceClient(config)

ci_client.create_container_instance(
    oci.container_instances.models.CreateContainerInstanceDetails(
        compartment_id="ocid1.compartment.oc1..example",
        display_name="myapp-instance",
        availability_domain="Uocm:US-ASHBURN-AD-1",
        shape="CI.Standard.E4.Flex",
        shape_config=oci.container_instances.models.CreateContainerInstanceShapeConfigDetails(
            ocpus=1.0,
            memory_in_gbs=4.0
        ),
        containers=[
            oci.container_instances.models.CreateContainerDetails(
                image_url="us-ashburn-1.ocir.io/{namespace}/myapp:latest",
                display_name="myapp"
            )
        ],
        vnics=[
            oci.container_instances.models.CreateContainerVnicDetails(
                subnet_id="ocid1.subnet.oc1..example"
            )
        ]
    )
)
print("Container Instance created")
```
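This create call also returns before the instance is running. The SDK ships a generic `oci.wait_until` helper for this; an equivalent hand-rolled poll (the helper name is ours, a sketch) looks like:

```python
import time

def wait_for_state(get_fn, ocid, target="ACTIVE", interval=15, timeout=600):
    # Poll any OCI get_* call until lifecycle_state reaches the target.
    deadline = time.time() + timeout
    while time.time() < deadline:
        resource = get_fn(ocid).data
        if resource.lifecycle_state == target:
            return resource
        if resource.lifecycle_state == "FAILED":
            raise RuntimeError(f"{ocid} entered FAILED state")
        time.sleep(interval)
    raise TimeoutError(f"{ocid} not {target} after {timeout}s")

# Usage: wait_for_state(ci_client.get_container_instance, "<instance-ocid>")
```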
Common errors and how to resolve them:
| Error | Code | Cause | Solution |
|---|---|---|---|
| NotAuthenticated | 401 | Bad auth token for OCIR or wrong API key | Regenerate auth token in Console > Profile > Auth Tokens |
| NotAuthorizedOrNotFound | 404 | Missing IAM policy for container engine | Add policy: Allow group Developers to manage cluster-family in compartment X |
| TooManyRequests | 429 | Rate limited on cluster operations | Wait and retry — OKE control plane ops are rate-limited |
| ImagePullBackOff | N/A | Wrong OCIR secret or image path | Verify docker-server, namespace, and image tag in pull secret |
| InternalError | 500 | OCI service issue | Check OCI Status and retry |
| Node pool stuck CREATING | N/A | Insufficient capacity in AD | Try a different availability domain or shape |
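For the 429 case, the SDK has built-in retry support (the `oci.retry` module, e.g. passing its default strategy as `retry_strategy=` on client calls). A hand-rolled equivalent for throttled control-plane calls, shown here as a sketch:

```python
import random
import time

def with_backoff(call, retries=5, base=1.0, cap=30.0):
    # Retry a zero-arg callable on HTTP 429 with exponential backoff and jitter.
    for attempt in range(retries):
        try:
            return call()
        except Exception as exc:
            # oci.exceptions.ServiceError exposes the HTTP status as .status
            if getattr(exc, "status", None) != 429 or attempt == retries - 1:
                raise
            time.sleep(min(cap, base * 2 ** attempt) * random.random())

# Usage: with_backoff(lambda: container_engine.create_cluster(details))
```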
Useful CLI one-liners for checking on deployments:

```bash
# List running container instances
oci container-instances container-instance list \
  --compartment-id ocid1.compartment.oc1..example

# Get cluster kubeconfig and check pods in one command
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..example \
  --file ~/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0 && kubectl get pods
```
Verify OCIR image exists:
```python
import oci

config = oci.config.from_file("~/.oci/config")
artifacts = oci.artifacts.ArtifactsClient(config)

images = artifacts.list_container_images(
    compartment_id="ocid1.compartment.oc1..example",
    repository_name="myapp"
).data

for img in images.items:
    print(f"{img.display_name} — {img.time_created}")
```
After deployment is working, proceed to oraclecloud-observability to set up monitoring and alerting for your running workloads, or see oraclecloud-performance-tuning to optimize your shape and storage choices.