@jialin.huang

© 2024 jialin00.com

Original content since 2022


Implementing a Go HTTP Server and Websocket with Fargate and ECR

This article explores AWS Elastic Container Service (ECS), focusing on Fargate and ECR. While we use a simple Go HTTP server as an example, our main goal is to understand ECS deployment processes. The Golang code serves just as a vehicle to create a basic container image. We'll cover key steps from local development to cloud deployment, highlighting the intricacies of AWS container services.

If You Know About Docker (It's Super Simple!)

If you already know Docker, this is familiar: you build an image, push it to a registry (ECR instead of Docker Hub), pull it down when needed, and then docker run.

docker build -t my-app .
docker tag my-app:latest 123456789.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/my-app:latest

The commands above are the AWS (ECR) version of the usual Docker flow.
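One prerequisite these commands assume: Docker must first be authenticated against your ECR registry, and the repository must exist. A sketch using the example account and region from above:

```shell
# log Docker in to ECR with a temporary password from the AWS CLI
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com

# create the repository once, if it doesn't exist yet
aws ecr create-repository --repository-name my-app
```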

After that, two more steps are needed to serve your image:

  1. Create a Task Definition in ECS:

    Specify the ECR image: 123456789.dkr.ecr.us-west-2.amazonaws.com/my-app:latest

  2. Launch a Service or Task and choose that task definition (the equivalent of docker run).

Done!
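The same two steps can also be done from the AWS CLI. A sketch only: the cluster name, subnet ID, and taskdef.json file are assumptions, not values from this article:

```shell
# 1. register the task definition (taskdef.json points at the ECR image)
aws ecs register-task-definition --cli-input-json file://taskdef.json

# 2. run it once as a standalone Fargate task
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-app \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-abc123],assignPublicIp=ENABLED}"
```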

Go Build Configuration Notes

It's a fairly standard multi-stage Dockerfile. We'll talk about why it needs these options:

  1. CGO_ENABLED
  2. GOARCH
# build stage
FROM golang:1.22 AS builder

WORKDIR /app

# cache dependency downloads
COPY go.mod go.sum ./
RUN go mod download

COPY . .

RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o main .

# runtime stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/

# copy the compiled binary from the build stage
COPY --from=builder /app/main .

CMD ["./main"]

Why do we need CGO_ENABLED=0?

It means: "no C library; use Go's own implementation for the low-level layers."

Go will use its pure-Go implementations to handle low-level operations (e.g. DNS resolution in the net package). Not depending on the system's C library avoids problems caused by differing libc versions across systems.

  • The Go compiler will use the language's built-in cross-platform abstraction layer.
  • These abstraction layers have specific implementations on different operating systems, ensuring cross-platform compatibility.

However, in scenarios that depend heavily on low-level C interop, you may need to keep CGO_ENABLED=1.

The resulting statically linked binary may be a bit larger, but it gains much higher cross-platform portability.
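You can check the effect yourself: with CGO_ENABLED=0 the produced binary has no dynamic libc dependency. A sketch, assuming a Linux machine with the Go toolchain and a module in the current directory:

```shell
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o main .
file main   # typically reports "statically linked"
ldd main    # typically reports "not a dynamic executable"
```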

Why do we need GOARCH=amd64?

RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# v.s.
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o main .

This is because when creating a Task Definition in ECS, you're asked to choose the Operating system/Architecture: ECS Fargate only supports specific platforms (e.g. Linux/X86_64 or Linux/ARM64).

You can try it out: if your build machine's architecture doesn't match the task's (say, an Apple Silicon Mac targeting X86_64) and you don't specify GOARCH during go build, running a new task with that image fails with exec ./main: exec format error.
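An alternative to hard-coding GOARCH inside the Dockerfile is to tell Docker the target platform at build time (a sketch; requires BuildKit/buildx):

```shell
docker buildx build --platform linux/amd64 -t my-app .
```

Either way, the point is the same: the binary's architecture must match what Fargate will actually run.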

Findings and Thoughts

Question 1: Capacity Provider Strategy vs. Launch Type

Launch Type

Intuitive: you simply choose whether the workload runs on Fargate, EC2, or Fargate Spot.

Nothing to worry about with this selection.

Capacity Provider Strategy (more complicated)

You can configure whether to use Fargate or Fargate Spot yourself.

In pricing, EC2 > Fargate > Fargate Spot; of course, Fargate Spot comes with the risk of interruption.

(Similar to EC2 having on-demand > reserved instance > spot instance, you make cost configurations based on your needs)

Note that when combining Fargate/Fargate Spot, there are two parameters we should know:

  1. base: the minimum number of tasks guaranteed on that provider; only one provider in a strategy can have a nonzero base.
  2. weight: the mechanism actually works like this:
  • First, the base requirements of all capacity providers are satisfied.
  • The remaining tasks are then distributed according to the weight ratio.

For example, let's say we have a strategy with two providers, and we need to run 12 tasks:

  Unit   | Fargate | Fargate Spot
  base   | 3       | 0
  weight | 1       | 2
  result | 6 (3+3) | 6 (0+6)

  1. First, the base is fulfilled: 3 tasks go to Fargate.
  2. The remaining 9 tasks are split by the weight ratio (1:2): 3 more to Fargate, 6 to Fargate Spot.

This strategy allows you to balance cost optimization (using more Spot capacity) with stability (ensuring a base on regular Fargate).
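The allocation in the example above (12 tasks, Fargate base=3 weight=1, Fargate Spot base=0 weight=2) is just arithmetic, sketched here:

```shell
total=12
base=3                                  # FARGATE base; FARGATE_SPOT base is 0
remaining=$((total - base))             # 9 tasks left once the base is met
fargate=$((base + remaining * 1 / 3))   # base + weight share 1/(1+2) = 3 more, 6 total
spot=$((remaining * 2 / 3))             # weight share 2/(1+2) = 6 total
echo "FARGATE=$fargate FARGATE_SPOT=$spot"
```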

The impact of Capacity Provider Strategy on the next step:

  • For Tasks, it mainly affects how many resources are initially placed where, like 70% for FARGATE SPOT and 30% for FARGATE. Tasks themselves don't Auto Scale based on conditions.
  • For Services, it determines the initial resource placement and will expand or contract as needed.

In simple terms, it has no effect on Tasks. A Service only decides how many units to add or remove based on auto-scaling, and then uses the initial Capacity Provider Strategy to make the final compute-resource allocation.

Question 2: Decide to run Service or Task?

If you need to integrate with other services, require long-running operations, or need load balancing, auto-scaling, microservice architecture, etc., choose Service.

Tasks are more suitable for cronjobs, batch processing, one-time processing, build tasks in CI/CD workflows, and other short-term, on-demand scenarios.

Question 3: Capacity Provider Strategy vs. ALB - Are They Similar?

A Capacity Provider Strategy distributes tasks across compute in a way that superficially resembles how an ALB distributes traffic, but if you want fine-grained traffic management or more comprehensive features, use an ALB.

When should you use ALB:

  1. When you need SSL/TLS termination or tight integration with other AWS services.
  2. When you don't want others to access a task's public IP directly and prefer to expose only the ALB DNS name, especially since task IPs may not be fixed.

Of course, ECS can be linked with ALB when adding tasks, but how do they work?

The communication between ECS and ALB goes through an intermediate target group. An ALB can map to one or more target groups to distinguish routes, for example:

  • domain.app/api → target group backend
  • domain.app/root → target group landing-page

The ALB matches requests to the services behind each target group according to the request path.

When creating a Service in ECS, the Load Balancing settings section will tell you that you can assign a target group for this service, which is the step that registers tasks to ALB.
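The path-based routing described above can also be expressed through the CLI as listener rules. A sketch only; the ARNs are placeholders:

```shell
# forward /api/* to the backend target group
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api/*' \
  --actions Type=forward,TargetGroupArn=<backend-target-group-arn>
```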

Thought 1: ECS Task Definition vs. EC2 AMI, Both Seem Like Blueprints

Indeed, both define work in advance, but a Task Definition has a smaller management scope than an EC2 AMI. An AMI covers the operating system, processes, and resource configuration needed to boot a machine later; a Task Definition covers less and has simpler options.

Task Definition is for containers, EC2 AMI is for VMs.

A task definition is faster to create, allows more granular resource allocation, and launches its compute resources more quickly.

A task definition must exist before you create a service or task, whereas a custom AMI is optional: you can launch an instance from a stock AMI.

Thought 2: ECS Running Task or Service Are Just UX Things

Whether you click Create Service or Run new Task at the beginning, the panel you land in is almost the same; the entry point merely presets the type, and you can still switch between task and service. The settings differ only in optional features: choose Task, for example, and the Auto Scaling and ALB options disappear. The service/task split seems to exist mostly for user perception.
Even the initial options in a Task Definition don't have to be followed strictly. For example:

Example 1: suppose my Task Definition's launch type is set to EC2 instances. When actually launching (as a Task or Service), I can still choose Fargate as the launch type; the console doesn't check whether the choice matches the bound Task Definition.

Example 2: if I launch via Create Service with, say, 3 replicas, then once it actually starts, those three replicas correspond to three newly created tasks.

It's worth noting that tasks are the actual running units, while services are logical abstractions, much like the k8s Service resource:

  ECS Unit | Kubernetes Unit
  Task     | Pod
  Service  | Service

For AWS, it's all about creating compute resources in the end, so no matter how complex or confusing the frontend might be, there's not much to worry about.

Thought 3: Maybe we can do better

If you want a nicer domain name, you can point Route53 directly at the ALB DNS name, and you can simply attach your certificate (e.g. from ACM) to the ALB:

  1. Add an HTTPS listener on port 443 and attach your domain's certificate.
  2. Configure the HTTP listener on port 80 to redirect to HTTPS.
  3. Set up Route53 by creating an A-record alias for your domain pointing to the ALB's DNS name.

That's it, you're done. You get a nicer domain name and HTTPS!
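The three steps above, sketched with the AWS CLI; every ARN, the zone ID, and the change-batch.json file are placeholders/assumptions:

```shell
# 1. HTTPS listener on 443 with an ACM certificate
aws elbv2 create-listener --load-balancer-arn <alb-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-cert-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# 2. HTTP listener on 80 that redirects to HTTPS
aws elbv2 create-listener --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'

# 3. A-record alias for the domain pointing at the ALB DNS name
aws route53 change-resource-record-sets \
  --hosted-zone-id <zone-id> \
  --change-batch file://change-batch.json
```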

Reference

https://stackoverflow.com/questions/73285601/docker-exec-usr-bin-sh-exec-format-error

https://docs.docker.com/build/building/multi-platform/

https://gist.github.com/asukakenji/f15ba7e588ac42795f421b48b8aede63

https://dev.to/metal3d/understand-how-to-use-c-libraries-in-go-with-cgo-3dbn
