https://pine32.be - © pine32.be 2024
Welcome! - 58 total posts. [RSS]
A funny little cycle. [LATEST]
#1722452920
Templ is now a recognized language on GitHub. Let's fucking Gooooo. Still the best HTML templating language I have used. Django's templating is also pretty good, but not as good or as fast.
#1709333288
Tested the backend of this blog on a small VM on my desktop, with both a native binary and Docker, to compare performance. My desktop could have a lot of variance, so this test is not that accurate. The VM has 2GB of RAM and 4 cores allocated. Memory is not a limiting factor, as the webserver uses 20MB at most. All 4 cores are utilised, but one core is always busier than the others. I assume this is because the webserver is database-limited for the most part, and SQLite is in some ways single-threaded.
1 [||||||||||||||||||||||||||||||| 78.5%]
2 [|||||||||||||||||||||||||||||||| 80.2%]
3 [|||||||||||||||||||||||||||||||| 80.3%]
4 [||||||||||||||||||||||||||||||||||95.7%]
I used the latest version at the time of writing.
I did 3 runs of each, alternating between the two versions to limit variation. I used a tool called bombardier with 20 concurrent connections for each 10-second run. The requests are made from the host machine to give the webserver the whole VM to itself. This adds extra overhead, but it is the same for both versions, so for the sake of comparison this is fine.
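The exact invocation is not shown in the post; using bombardier's -c (connections), -d (duration), and -l (print latency distribution) flags, it would look roughly like this, with the VM's address as a placeholder:

```shell
# 20 concurrent connections, one 10-second run, print the latency distribution
# (the host/port is a placeholder; the real VM address is not given in the post)
bombardier -c 20 -d 10s -l http://192.168.122.10:8080/
```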
These are the averaged results for both versions.
Native binary:
Statistics Avg Stdev Max
Reqs/sec 262.03 152.02 913.95
Latency 75.22ms 53.44ms 0.93s
Latency Distribution
50% 71.05ms
75% 104.18ms
90% 127.14ms
95% 143.13ms
99% 188.12ms
HTTP codes:
1xx - 0, 2xx - 2643, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 3.15MB/s
Docker:
Statistics Avg Stdev Max
Reqs/sec 260.70 150.30 962.85
Latency 76.59ms 53.97ms 0.95s
Latency Distribution
50% 70.67ms
75% 107.55ms
90% 135.69ms
95% 151.88ms
99% 197.22ms
HTTP codes:
1xx - 0, 2xx - 2622, 3xx - 0, 4xx - 0, 5xx - 0
others - 0
Throughput: 3.12MB/s
These results are really close, with a very slight edge for the binary: only 0.5% more requests per second and 1.78% lower latency on average. This is within the margin of error for this unstable setup. But even if we took these results as 100% accurate, I still think Docker is worth the small hit in performance. That hit is probably even smaller on a bigger server running closer to bare metal.
#1709055561
I am going to keep spreading Golang propaganda until everybody uses Go and then I will make my move to Elixir or OCaml.
#1707727407
SSE is cool.
This is an example using the Echo framework in Golang. It sets up the connection, consumes a channel, and sends that data to the user. It also exits the loop if the connection is broken, which is communicated via the request context.
func BuildingSSE(c echo.Context) error {
	c.Response().Header().Set(echo.HeaderCacheControl, "no-cache")
	c.Response().Header().Set(echo.HeaderConnection, "keep-alive")
	c.Response().Header().Set(echo.HeaderContentType, "text/event-stream")

	queue := queue.GetBuildQueue()
	ctx := c.Request().Context()

	for {
		select {
		case result := <-queue.BuildLogsChannel:
			fmt.Fprint(c.Response(), buildSSE("message", result))
			c.Response().Flush()
		case <-ctx.Done():
			return nil
		}
	}
}
func buildSSE(event, context string) string {
	var result string
	if len(event) != 0 {
		result = result + "event: " + event + "\n"
	}
	if len(context) != 0 {
		result = result + "data: " + context + "\n"
	}
	result = result + "\n"
	return result
}