Deployment Guide

This guide provides instructions for deploying EVOSEAL in various environments.

Prerequisites

  • Python 3.10+
  • pip (Python package manager)
  • Git
  • (Optional) Docker and Docker Compose
  • (Optional) Kubernetes cluster (for production)

Systemd Service Setup

EVOSEAL can be run as a systemd service for continuous operation. Here's how to set it up:

1. Create Environment File

Copy the template environment file and customize it if needed:

cp .evoseal.env.template .evoseal.env
# Edit .evoseal.env to customize settings

2. Install the Service

Copy the service file to the systemd directory and enable it:

# Copy service file
sudo cp scripts/evoseal.service /etc/systemd/system/

# Reload systemd
sudo systemctl daemon-reload

# Enable and start the service
sudo systemctl enable --now evoseal.service
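
For reference, a unit file like `scripts/evoseal.service` typically looks like the sketch below. This is illustrative only: the paths, service user, and `ExecStart` command are assumptions, not the shipped values, so compare against the actual file in the repository.

```ini
# Illustrative sketch of a systemd unit for EVOSEAL.
# User, paths, and ExecStart are assumptions -- see scripts/evoseal.service.
[Unit]
Description=EVOSEAL continuous evolution service
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=evoseal
WorkingDirectory=/opt/evoseal
EnvironmentFile=/opt/evoseal/.evoseal.env
ExecStart=/opt/evoseal/.venv/bin/python -m evoseal.cli
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

The `EnvironmentFile=` line is what makes the `.evoseal.env` file from step 1 visible to the service.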

3. Verify Service Status

Check if the service is running:

sudo systemctl status evoseal.service

4. View Logs

To view the logs:

# Follow logs in real-time
sudo journalctl -u evoseal.service -f

# View full logs
sudo journalctl -u evoseal.service --no-pager

Local Development

1. Clone the Repository

git clone https://github.com/SHA888/EVOSEAL.git
cd EVOSEAL

2. Set Up Virtual Environment

python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

3. Install Dependencies

pip install -r requirements.txt

4. Configure Environment Variables

Create a .env file in the project root:

# Required
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key

# Optional
LOG_LEVEL=INFO
CACHE_DIR=./.cache

5. Run the Application

python -m evoseal.cli

Docker Deployment

1. Build the Docker Image

docker build -t evoseal:latest .

2. Run the Container

docker run -d \
  --name evoseal \
  -p 8000:8000 \
  --env-file .env \
  evoseal:latest

3. Using Docker Compose

Create a docker-compose.yml file:

version: '3.8'

services:
  evoseal:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
    volumes:
      - ./cache:/app/.cache
    restart: unless-stopped

Then run:

docker-compose up -d

Kubernetes Deployment

1. Create a Namespace

kubectl create namespace evoseal

2. Create a Secret for Environment Variables

kubectl create secret generic evoseal-secrets \
  --namespace=evoseal \
  --from-env-file=.env

3. Deploy the Application

Create a deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: evoseal
  namespace: evoseal
spec:
  replicas: 3
  selector:
    matchLabels:
      app: evoseal
  template:
    metadata:
      labels:
        app: evoseal
    spec:
      containers:
      - name: evoseal
        image: your-registry/evoseal:latest
        ports:
        - containerPort: 8000
        envFrom:
        - secretRef:
            name: evoseal-secrets
        resources:
          limits:
            cpu: "1"
            memory: "1Gi"
          requests:
            cpu: "500m"
            memory: "512Mi"

4. Create a Service

apiVersion: v1
kind: Service
metadata:
  name: evoseal
  namespace: evoseal
spec:
  selector:
    app: evoseal
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: LoadBalancer

5. Apply the Configuration

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

Serverless Deployment

AWS Lambda

  1. Install the AWS SAM CLI
  2. Create a template.yaml
  3. Build and deploy using SAM:

sam build
sam deploy --guided
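
The `template.yaml` from step 2 might look like the sketch below. This is not a tested template: the function name, handler path, and runtime settings are assumptions for illustration only.

```yaml
# Minimal SAM template sketch; handler path and resource sizes are
# illustrative assumptions, not tested values.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  EvosealFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: evoseal.lambda_handler.handler   # hypothetical entry point
      Runtime: python3.10
      Timeout: 60
      MemorySize: 1024
      Environment:
        Variables:
          LOG_LEVEL: INFO
```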

Configuration Management

Environment Variables

| Variable          | Required | Default  | Description                                            |
|-------------------|----------|----------|--------------------------------------------------------|
| OPENAI_API_KEY    | Yes      | -        | Your OpenAI API key                                    |
| ANTHROPIC_API_KEY | Yes      | -        | Your Anthropic API key                                 |
| LOG_LEVEL         | No       | INFO     | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)  |
| CACHE_DIR         | No       | ./.cache | Directory to store cache files                         |
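
The required/optional split in the table above can be consumed along these lines. This is a sketch of the pattern, not EVOSEAL's actual configuration loader:

```python
import os

# Sketch of reading the variables in the table above; EVOSEAL's real
# loader may differ. Required keys fail fast, optional keys get defaults.
def load_settings(env=os.environ):
    missing = [k for k in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY") if k not in env]
    if missing:
        raise RuntimeError(f"Missing required variables: {', '.join(missing)}")
    return {
        "openai_api_key": env["OPENAI_API_KEY"],
        "anthropic_api_key": env["ANTHROPIC_API_KEY"],
        "log_level": env.get("LOG_LEVEL", "INFO"),      # defaults to INFO
        "cache_dir": env.get("CACHE_DIR", "./.cache"),  # defaults to ./.cache
    }
```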

Configuration Files

EVOSEAL supports configuration through YAML files. The default location is config/config.yaml.
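
A `config/config.yaml` might look like the sketch below. The keys shown are assumptions for illustration; check the shipped default configuration for the actual schema.

```yaml
# config/config.yaml -- illustrative sketch only; key names are
# assumptions, not the documented schema.
log_level: INFO
cache_dir: ./.cache
providers:
  openai:
    model: your-openai-model
  anthropic:
    model: your-anthropic-model
```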

Scaling

Horizontal Scaling

  • Use Kubernetes HPA (Horizontal Pod Autoscaler)
  • Set appropriate resource requests and limits
  • Use a message queue for task distribution
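
An HPA targeting the Deployment from the Kubernetes section can be sketched as follows. The 70% CPU target and replica bounds are illustrative starting points, not tuned recommendations:

```yaml
# HPA sketch for the evoseal Deployment; thresholds are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: evoseal
  namespace: evoseal
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: evoseal
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The HPA can only scale sensibly if the Deployment sets CPU requests, which the earlier manifest does.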

Caching

  • Enable caching for API responses
  • Use Redis or Memcached for distributed caching
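
The get-or-compute pattern behind API response caching can be sketched with a simple in-process TTL cache. A Redis or Memcached version would follow the same pattern against a shared store; this is not EVOSEAL's built-in caching layer:

```python
import time
from functools import wraps

# In-process TTL cache sketch. A distributed version would swap the dict
# for Redis/Memcached; the get-or-compute logic stays the same.
def ttl_cache(ttl_seconds=300):
    def decorator(fn):
        store = {}

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[1] < ttl_seconds:
                return hit[0]               # fresh cached value, no API call
            value = fn(*args)
            store[args] = (value, now)      # cache result with timestamp
            return value
        return wrapper
    return decorator

calls = []

@ttl_cache(ttl_seconds=60)
def fake_api(prompt):
    calls.append(prompt)                    # stands in for a paid API call
    return f"response:{prompt}"
```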

Monitoring

Logging

  • Configure log aggregation (ELK, Loki, etc.)
  • Set up log rotation

Metrics

  • Expose Prometheus metrics
  • Set up Grafana dashboards
  • Monitor error rates and latency

Alerting

  • Set up alerts for errors and performance issues
  • Use tools like Prometheus Alertmanager

Backup and Recovery

Data Backup

  • Regularly back up your database
  • Test restoration procedures
  • Store backups in multiple locations
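
A timestamped archive job covering the points above can be sketched in a few lines of shell. The paths are illustrative, and the sample data is created here only so the script runs standalone; in a real deployment point `SRC_DIR` at your actual data:

```shell
#!/usr/bin/env bash
# Backup sketch: timestamped tar.gz of a data directory.
# Paths are illustrative; sample data is for demonstration only.
set -euo pipefail

SRC_DIR="./data"
BACKUP_DIR="./backups"

mkdir -p "$SRC_DIR" "$BACKUP_DIR"
echo "example" > "$SRC_DIR/sample.txt"   # demo data only

STAMP="$(date +%Y%m%d-%H%M%S)"
ARCHIVE="$BACKUP_DIR/evoseal-$STAMP.tar.gz"

tar -czf "$ARCHIVE" -C "$(dirname "$SRC_DIR")" "$(basename "$SRC_DIR")"
echo "wrote $ARCHIVE"
```

Restoration should be tested regularly, e.g. by extracting the archive into a scratch directory and comparing contents.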

Disaster Recovery

  • Have a disaster recovery plan
  • Test failover procedures
  • Document recovery steps

Security Considerations

Network Security

  • Use TLS/SSL for all communications
  • Implement network policies
  • Use a WAF (Web Application Firewall)

Access Control

  • Implement proper authentication and authorization
  • Use role-based access control (RBAC)
  • Rotate API keys regularly

Data Protection

  • Encrypt sensitive data at rest and in transit
  • Implement proper key management
  • Follow the principle of least privilege

Upgrading

  1. Check the release notes for breaking changes
  2. Backup your data
  3. Test the upgrade in a staging environment
  4. Deploy to production during a maintenance window

Troubleshooting

Common Issues

  1. API Connection Issues
       • Check your API keys
       • Verify network connectivity
       • Check rate limits
  2. Performance Problems
       • Check resource utilization
       • Review query performance
       • Check for memory leaks
  3. Deployment Failures
       • Check container logs
       • Verify configuration
       • Check resource quotas
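
Transient API connection issues often resolve on their own, so a retry with exponential backoff is a common first mitigation. The sketch below illustrates the pattern; it is not EVOSEAL's actual client behavior:

```python
import time

# Exponential-backoff retry sketch for transient API errors.
# Illustrative pattern only, not EVOSEAL's client implementation.
def with_retries(fn, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                                    # out of retries
            time.sleep(base_delay * (2 ** attempt))      # back off: 1x, 2x, 4x...

failures = {"left": 2}

def flaky_call():
    # Simulates an endpoint that fails twice, then succeeds.
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("simulated transient failure")
    return "ok"
```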

Support

For additional help, please open an issue.


Last update: 2025-07-22
Created: 2025-06-17