Range-diff rd-121

1. feat(pgs): lru cache for object info and special files
   Patch changed: old #1 (7338b44) -> new #1 (26daea4)

2. chore(pgs): use http cache clear event to rm lru cache for special files
   Patch added: old #0 (none) -> new #2 (b004b64)

3. refactor(pgs): store lru cache on web router
   Patch added: old #0 (none) -> new #3 (59f5618)
1: 7338b44 ! 1: 26daea4 feat(pgs): lru cache for object info and special files
    title changed:
    - feat: allow config `desc` to add a description box to index page
    + feat(pgs): lru cache for object info and special files

    message changed:
    - refactor: consolidate docs page onto index
    -
    - This change will allow maintainers to add a description box to the top of
    - their git-pr instance as an introduction to their service.
old: cfg.go
new: cfg.go
 	CreateRepo string `koanf:"create_repo"`
 	Theme      string `koanf:"theme"`
 	TimeFormat string `koanf:"time_format"`
+	Desc       string `koanf:"desc"`
 	Logger     *slog.Logger
 }

 	"theme", out.Theme,
 	"time_format", out.TimeFormat,
 	"create_repo", out.CreateRepo,
+	"desc", out.Desc,
 )
 for _, pubkey := range out.AdminsStr {
old: git-pr.toml
new: git-pr.toml
 # admin: only admins
 # user: admins and users
 create_repo = "user"
+desc = ""
old: static/git-pr.css
new: static/git-pr.css
 }
 pre {
-  font-size: 1rem;
+  padding: var(--grid-height);
 }
 table, tr {
old: tmpl/docs.html
-{{template "base" .}} - -{{define "title"}}git-pr{{end}} - -{{define "meta"}} -<link rel="alternate" type="application/atom+xml" - title="RSS feed for git collaboration server" - href="/rss" /> -{{end}} - -{{define "body"}} -<header class="group"> - <h1 class="text-2xl"><a href="/">DASHBOARD</a> / docs</h1> - <div> - <span>A pastebin supercharged for git collaboration</span> · - <a href="https://github.com/picosh/git-pr">github</a> · - <a href="https://youtu.be/d28Dih-BBUw">demo video</a> - </div> - <pre class="m-0">ssh {{.MetaData.URL}} help</pre> -</header> - -<main class="group"> - <details> - <summary>Intro</summary> - - <div> - <p> - We are trying to build the simplest git collaboration tool. The goal is to make - self-hosting as simple as running an SSH server -- all without - sacrificing external collaborators time and energy. - </p> - - <blockquote> - <code>git format-patch</code> isn't the problem and pull requests aren't the solution. - </blockquote> - - <p> - We are combining mailing list and pull request workflows. In order to build the - simplest collaboration tool, we needed something as simple as generating patches - but the ease-of-use of pull requests. - </p> - - <p> - The goal is not to create another code forge, the goal is to create a very - simple self-hosted git solution with the ability to collaborate with external - contributors. All the code owner needs to setup a running git server: - </p> - - <ul><li>A single golang binary</li></ul> - - <div> - All an external contributor needs is: - </div> - - <ul> - <li>An SSH keypair</li> - <li>An SSH client</li> - </ul> - - <div>Then everyone subscribes to our RSS feeds to receive updates to patch requests.</div> - - <h2 class="text-xl">the problem</h2> - - <p> - Email is great as a decentralized system to send and receive changes (patchsets) - to a git repo. 
However, onboarding a new user to a mailing list, properly - setting up their email client, and then finally submitting the code contribution - is enough to make many developers give up. Further, because we are leveraging - the email protocol for collaboration, we are limited by its feature-set. For - example, it is not possible to make edits to emails, everyone has a different - client, those clients have different limitations around plain text email and - downloading patches from it. - </p> - - <p> - Github pull requests are easy to use, easy to edit, and easy to manage. The - downside is it forces the user to be inside their website to perform reviews. - For quick changes, this is great, but when you start reading code within a web - browser, there are quite a few downsides. At a certain point, it makes more - sense to review code inside your local development environment, IDE, etc. There - are tools and plugins that allow users to review PRs inside their IDE, but it - requires a herculean effort to make it usable. - </p> - - <p> - Further, self-hosted solutions that mimic a pull request require a lot of - infrastructure in order to manage it. A database, a web site connected to git, - admin management, and services to manage it all. Another big point of friction: - before an external user submits a code change, they first need to create an - account and then login. This adds quite a bit of friction for a self-hosted - solution, not only for an external contributor, but also for the code owner who - has to provision the infra. Often times they also have to fork the repo within - the code forge before submitting a PR. Then they never make a contribution ever - again and keep a forked repo around forever. That seems silly. 
- </p> - - <h2 class="text-xl">introducing patch requests (PR)</h2> - - <p> - Instead, we want to create a self-hosted git "server" that can handle sending - and receiving patches without the cumbersome nature of setting up email or the - limitations imposed by the email protocol. Further, we want the primary workflow - to surround the local development environment. Github is bringing the IDE to the - browser in order to support their workflow, we want to flip that idea on its - head by making code reviews a first-class citizen inside your local development - environment. - </p> - - <p> - We see this as a hybrid between the github workflow of a pull request and - sending and receiving patches over email. - </p> - - <p> - The basic idea is to leverage an SSH app to handle most of the interaction - between contributor and owner of a project. Everything can be done completely - within the terminal, in a way that is ergonomic and fully featured. - </p> - - <p> - Notifications would happen with RSS and all state mutations would result in the - generation of static web assets so it can all be hosted using a simple file web - server. - </p> - - <h3 class="text-lg">format-patch workflow</h3> - - <p> - The fundamental collaboration tool here is <code>format-patch</code>. Whether you a - submitting code changes or you are reviewing code changes, it all happens in - code. Both contributor and owner are simply creating new commits and generating - patches on top of each other. This obviates the need to have a web viewer where - the reviewing can "comment" on a line of code block. There's no need, apply the - contributor's patches, write comments or code changes, generate a new patch, - send the patch to the git server as a "review." This flow also works the exact - same if two users are collaborating on a set of changes. - </p> - - <p> - This also solves the problem of sending multiple patchsets for the same code - change. 
There's a single, central Patch Request where all changes and - collaboration happens. - </p> - - <p> - We could figure out a way to leverage <code>git notes</code> for reviews / comments, but - honestly, that solution feels brutal and outside the comfort level of most git - users. Just send reviews as code and write comments in the programming language - you are using. It's the job of the contributor to "address" those comments and - then remove them in subsequent patches. This is the forcing function to address - all comments: the patch won't be merged if there are comment unaddressed in - code; they cannot be ignored or else they will be upstreamed erroneously. - </p> - </div> - </details> - - <details> - <summary>How do Patch Requests work?</summary> - <div> - Patch requests (PR) are the simplest way to submit, review, and accept changes to your git repository. - Here's how it works: - </div> - - <ol> - <li>External contributor clones repo (<code>git-clone</code>)</li> - <li>External contributor makes a code change (<code>git-add</code> & <code>git-commit</code>)</li> - <li>External contributor generates patches (<code>git-format-patch</code>)</li> - <li>External contributor submits a PR to SSH server</li> - <li>Owner receives RSS notification that there's a new PR</li> - <li>Owner applies patches locally (<code>git-am</code>) from SSH server</li> - <li>Owner makes suggestions in code! 
(<code>git-add</code> & <code>git-commit</code>)</li> - <li>Owner submits review by piping patch to SSH server (<code>git-format-patch</code>)</li> - <li>External contributor receives RSS notification of the PR review</li> - <li>External contributor re-applies patches (<code>git-am</code>)</li> - <li>External contributor reviews and removes comments in code!</li> - <li>External contributor submits another patch (<code>git-format-patch</code>)</li> - <li>Owner applies patches locally (<code>git-am</code>)</li> - <li>Owner marks PR as accepted and pushes code to main (<code>git-push</code>)</li> - </ol> - - <div>Example commands</div> - - <pre># Owner hosts repo `test.git` using github - -# Contributor clones repo -git clone git@github.com:picosh/test.git - -# Contributor wants to make a change -# Contributor makes changes via commits -git add -A && git commit -m "fix: some bugs" - -# Contributor runs: -git format-patch origin/main --stdout | ssh {{.MetaData.URL}} pr create test -# > Patch Request has been created (ID: 1) - -# Owner can checkout patch: -ssh {{.MetaData.URL}} pr print 1 | git am -3 -# Owner can comment (IN CODE), commit, then send another format-patch -# on top of the PR: -git format-patch origin/main --stdout | ssh {{.MetaData.URL}} pr add --review 1 -# UI clearly marks patch as a review - -# Contributor can checkout reviews -ssh {{.MetaData.URL}} pr print 1 | git am -3 - -# Owner can reject a pr: -ssh {{.MetaData.URL}} pr close 1 - -# Owner can accept a pr: -ssh {{.MetaData.URL}} pr accept 1 - -# Owner can prep PR for upstream: -git rebase -i origin/main - -# Then push to upstream -git push origin main - -# Done! -</pre> - </details> - - <details> - <summary>What's a repo?</summary> - - <div> - A repo is designed to mimick a git repo, but it's really just a tag. When - submitting a patch request, if the user does not provide a repo name then - the default "bin" will be selected. 
When a user creates a repo they become - the repo owner and have special privileges. - </div> - </details> - - <details> - <summary>Can anyone use this service?</summary> - - <div> - This service is a public space for anyone to freely create "repos" and - collaborate with users. Anyone is able to add patchsets to a patch request - and anyone is able to review any other patch requests, regardless of repo. - </div> - </details> - - <details> - <summary>First time user experience</summary> - - <div> - Using this service for the first time? Creating a patch request is simple: - </div> - - <pre>git format-patch main --stdout | ssh pr.pico.sh pr create {repo}</pre> - - <div>When running that command we will automatically create a user and a repo if one doesn't exist.</div> - - <div>Want to submit a v2 of the patch request?</div> - - <pre>git format-patch main --stdout | ssh pr.pico.sh pr add {prID}</pre> - </details> - - <details> - <summary>How do I receive notifications?</summary> - - <div> - We have different RSS feeds depending on the use case. This is how you - can receive notifications for when someone submits or reviews patch requests. - </div> - </details> - - <details> - <summary>Alternative git collaboration systems</summary> - - <div> - <ol> - <li><a href="https://gerritcodereview.com/">Gerrit</a></li> - <li><a href="https://we.phorge.it/">Phorge</a> (fork of Phabricator)</li> - <li><a href="https://graphite.dev/docs/cli-quick-start">Graphite</a></li> - <li><a href="https://codeapprove.com/">CodeApprove</a></li> - <li><a href="https://reviewable.io/">Reviewable</a></li> - </ol> - </div> - </details> -</main> - -{{end}}
old: tmpl/index.html
new: tmpl/index.html
{{define "body"}} <header class="group"> - <h1 class="text-2xl">patchbin</h1> + <h1 class="text-2xl">git-pr</h1> <div> <span>A pastebin supercharged for git collaboration</span> · - <a href="/docs">docs</a> + <a href="https://github.com/picosh/git-pr">github</a> · + <a href="https://youtu.be/d28Dih-BBUw">demo video</a> </div> + {{if .MetaData.Desc}} <div class="box-sm"> - <div> - Welcome to <a href="https://pico.sh">pico's</a> managed patchbin service! - This is a <strong>public</strong> service that is free to anyone who wants - to collaborate on git patches. The idea is simple: submit a patchset to - our service and let anyone collaborate on it by submitting follow-up patchsets. - Using this service for the first time? Creating a patch request is simple: - </div> + <div>{{.MetaData.Desc}}</div> + </div> + {{end}} - <pre class="text-sm">git format-patch main --stdout | ssh pr.pico.sh pr create {repo}</pre> + <details> + <summary>Intro</summary> <div> - When running that command we will automatically create a user and a repo - if one doesn't exist. Once the patches have been submitted you'll receive - a link that you can send to a reviewer. Anyone can review patch requests. - Want to submit a v2 of the patch request? + <p> + We are trying to build the simplest git collaboration tool. The goal is to make + self-hosting as simple as running an SSH server -- all without + sacrificing external collaborators time and energy. + </p> + + <blockquote> + <code>git format-patch</code> isn't the problem and pull requests aren't the solution. + </blockquote> + + <p> + We are combining mailing list and pull request workflows. In order to build the + simplest collaboration tool, we needed something as simple as generating patches + but the ease-of-use of pull requests. + </p> + + <p> + The goal is not to create another code forge, the goal is to create a very + simple self-hosted git solution with the ability to collaborate with external + contributors. 
All the code owner needs to setup a running git server: + </p> + + <ul><li>A single golang binary</li></ul> + + <div> + All an external contributor needs is: + </div> + + <ul> + <li>An SSH keypair</li> + <li>An SSH client</li> + </ul> + + <p>Then everyone subscribes to our RSS feeds to receive updates to patch requests.</p> + + <h2 class="text-xl">the problem</h2> + + <p> + Email is great as a decentralized system to send and receive changes (patchsets) + to a git repo. However, onboarding a new user to a mailing list, properly + setting up their email client, and then finally submitting the code contribution + is enough to make many developers give up. Further, because we are leveraging + the email protocol for collaboration, we are limited by its feature-set. For + example, it is not possible to make edits to emails, everyone has a different + client, those clients have different limitations around plain text email and + downloading patches from it. + </p> + + <p> + Github pull requests are easy to use, easy to edit, and easy to manage. The + downside is it forces the user to be inside their website to perform reviews. + For quick changes, this is great, but when you start reading code within a web + browser, there are quite a few downsides. At a certain point, it makes more + sense to review code inside your local development environment, IDE, etc. There + are tools and plugins that allow users to review PRs inside their IDE, but it + requires a herculean effort to make it usable. + </p> + + <p> + Further, self-hosted solutions that mimic a pull request require a lot of + infrastructure in order to manage it. A database, a web site connected to git, + admin management, and services to manage it all. Another big point of friction: + before an external user submits a code change, they first need to create an + account and then login. 
This adds quite a bit of friction for a self-hosted + solution, not only for an external contributor, but also for the code owner who + has to provision the infra. Often times they also have to fork the repo within + the code forge before submitting a PR. Then they never make a contribution ever + again and keep a forked repo around forever. That seems silly. + </p> + + <h2 class="text-xl">introducing patch requests (PR)</h2> + + <p> + Instead, we want to create a self-hosted git "server" that can handle sending + and receiving patches without the cumbersome nature of setting up email or the + limitations imposed by the email protocol. Further, we want the primary workflow + to surround the local development environment. Github is bringing the IDE to the + browser in order to support their workflow, we want to flip that idea on its + head by making code reviews a first-class citizen inside your local development + environment. + </p> + + <p> + We see this as a hybrid between the github workflow of a pull request and + sending and receiving patches over email. + </p> + + <p> + The basic idea is to leverage an SSH app to handle most of the interaction + between contributor and owner of a project. Everything can be done completely + within the terminal, in a way that is ergonomic and fully featured. + </p> + + <p> + Notifications would happen with RSS and all state mutations would result in the + generation of static web assets so it can all be hosted using a simple file web + server. + </p> + + <h3 class="text-lg">format-patch workflow</h3> + + <p> + The fundamental collaboration tool here is <code>format-patch</code>. Whether you a + submitting code changes or you are reviewing code changes, it all happens in + code. Both contributor and owner are simply creating new commits and generating + patches on top of each other. This obviates the need to have a web viewer where + the reviewing can "comment" on a line of code block. 
There's no need, apply the + contributor's patches, write comments or code changes, generate a new patch, + send the patch to the git server as a "review." This flow also works the exact + same if two users are collaborating on a set of changes. + </p> + + <p> + This also solves the problem of sending multiple patchsets for the same code + change. There's a single, central Patch Request where all changes and + collaboration happens. + </p> + + <p> + We could figure out a way to leverage <code>git notes</code> for reviews / comments, but + honestly, that solution feels brutal and outside the comfort level of most git + users. Just send reviews as code and write comments in the programming language + you are using. It's the job of the contributor to "address" those comments and + then remove them in subsequent patches. This is the forcing function to address + all comments: the patch won't be merged if there are comment unaddressed in + code; they cannot be ignored or else they will be upstreamed erroneously. + </p> </div> + </details> + + <details> + <summary>How do Patch Requests work?</summary> + <div> + Patch requests (PR) are the simplest way to submit, review, and accept changes to your git repository. + Here's how it works: + </div> + + <ol> + <li>External contributor clones repo (<code>git-clone</code>)</li> + <li>External contributor makes a code change (<code>git-add</code> & <code>git-commit</code>)</li> + <li>External contributor generates patches (<code>git-format-patch</code>)</li> + <li>External contributor submits a PR to SSH server</li> + <li>Owner receives RSS notification that there's a new PR</li> + <li>Owner applies patches locally (<code>git-am</code>) from SSH server</li> + <li>Owner makes suggestions in code! 
(<code>git-add</code> & <code>git-commit</code>)</li> + <li>Owner submits review by piping patch to SSH server (<code>git-format-patch</code>)</li> + <li>External contributor receives RSS notification of the PR review</li> + <li>External contributor re-applies patches (<code>git-am</code>)</li> + <li>External contributor reviews and removes comments in code!</li> + <li>External contributor submits another patch (<code>git-format-patch</code>)</li> + <li>Owner applies patches locally (<code>git-am</code>)</li> + <li>Owner marks PR as accepted and pushes code to main (<code>git-push</code>)</li> + </ol> + + <div>Example commands</div> + + <pre># Owner hosts repo `test.git` using github + +# Contributor clones repo +git clone git@github.com:picosh/test.git + +# Contributor wants to make a change +# Contributor makes changes via commits +git add -A && git commit -m "fix: some bugs" + +# Contributor runs: +git format-patch origin/main --stdout | ssh {{.MetaData.URL}} pr create test +# > Patch Request has been created (ID: 1) + +# Owner can checkout patch: +ssh {{.MetaData.URL}} pr print 1 | git am -3 + +# Owner can comment (IN CODE), commit, then send another format-patch +# on top of the PR: +git format-patch origin/main --stdout | ssh {{.MetaData.URL}} pr add --review 1 +# UI clearly marks patch as a review - <pre class="text-sm">git format-patch main --stdout | ssh pr.pico.sh pr add {prID}</pre> +# Contributor can checkout reviews +ssh {{.MetaData.URL}} print pr-1 | git am -3 + +# Owner can reject a pr: +ssh {{.MetaData.URL}} pr close 1 + +# Owner can accept a pr: +ssh {{.MetaData.URL}} pr accept 1 + +# Owner can prep PR for upstream: +git rebase -i origin/main + +# Then push to upstream +git push origin main + +# Done! +</pre> + </details> + + <details> + <summary>First time user?</summary> <div> - Downloading a patchset is easy as well: + Using this service for the first time? 
Creating a patch request is simple: </div> - <pre class="text-sm">ssh pr.pico.sh print pr-{prID}</pre> - </div> + <pre>git format-patch main --stdout | ssh {{.MetaData.URL}} pr create {repo}</pre> + + <div>When running that command we will automatically create a user and a repo if one doesn't exist.</div> + + <div>Want to submit a v2 of the patch request?</div> + + <pre>git format-patch main --stdout | ssh {{.MetaData.URL}} pr add {prID}</pre> + </details> </header> <main> </main> <footer class="mt"> - <div><a href="/rss">rss</a></div> + <a href="/rss">rss</a> </footer> {{end}}
old: tmpl/pr-header.html
new: tmpl/pr-header.html
 <details>
   <summary>Help</summary>
   <div class="group">
-    <pre class="m-0"># checkout latest patchset
-ssh {{.MetaData.URL}} print pr-{{.Pr.ID}} | git am -3</pre>
-    <pre class="m-0"># checkout any patchset in a patch request
-ssh {{.MetaData.URL}} print ps-X | git am -3</pre>
-    <pre class="m-0"># add changes to patch request
-git format-patch {{.Branch}} --stdout | ssh {{.MetaData.URL}} pr add {{.Pr.ID}}</pre>
-    <pre class="m-0"># add review to patch request
-git format-patch {{.Branch}} --stdout | ssh {{.MetaData.URL}} pr add --review {{.Pr.ID}}</pre>
-    <pre class="m-0"># accept PR
-ssh {{.MetaData.URL}} pr accept {{.Pr.ID}}</pre>
-    <pre class="m-0"># close PR
-ssh {{.MetaData.URL}} pr close {{.Pr.ID}}</pre>
+    checkout latest patchset:
+    <pre class="m-0">ssh {{.MetaData.URL}} print pr-{{.Pr.ID}} | git am -3</pre>
+
+    checkout any patchset in a patch request:
+    <pre class="m-0">ssh {{.MetaData.URL}} print ps-X | git am -3</pre>
+
+    add changes to patch request:
+    <pre class="m-0">git format-patch {{.Branch}} --stdout | ssh {{.MetaData.URL}} pr add {{.Pr.ID}}</pre>
+
+    add review to patch request:
+    <pre class="m-0">git format-patch {{.Branch}} --stdout | ssh {{.MetaData.URL}} pr add --review {{.Pr.ID}}</pre>
+
+    accept PR:
+    <pre class="m-0">ssh {{.MetaData.URL}} pr accept {{.Pr.ID}}</pre>
+
+    close PR:
+    <pre class="m-0">ssh {{.MetaData.URL}} pr close {{.Pr.ID}}</pre>
   </div>
 </details>
 </header>
old: web.go
new: web.go
 	return prdata, nil
 }

-func docsHandler(w http.ResponseWriter, r *http.Request) {
-	web, err := getWebCtx(r)
-	if err != nil {
-		w.WriteHeader(http.StatusInternalServerError)
-		return
-	}
-
-	w.Header().Set("content-type", "text/html")
-	tmpl := getTemplate("docs.html")
-	err = tmpl.ExecuteTemplate(w, "docs.html", BasicData{
-		MetaData: MetaData{
-			URL: web.Backend.Cfg.Url,
-		},
-	})
-	if err != nil {
-		web.Backend.Logger.Error("cannot execute template", "err", err)
-	}
-}
-
 func indexHandler(w http.ResponseWriter, r *http.Request) {
 	web, err := getWebCtx(r)
 	if err != nil {

 		NumClosed: numClosed,
 		Prs:       prdata,
 		MetaData: MetaData{
-			URL: web.Backend.Cfg.Url,
+			URL:  web.Backend.Cfg.Url,
+			Desc: template.HTML(web.Backend.Cfg.Desc),
 		},
 	})
 	if err != nil {
 }

 type MetaData struct {
-	URL string
+	URL  string
+	Desc template.HTML
 }

 type PrListData struct {
 }

 	formatter := formatterHtml.New(
 		formatterHtml.WithLineNumbers(true),
+		formatterHtml.LineNumbersInTable(true),
 		formatterHtml.WithClasses(true),
 		formatterHtml.WithLinkableLineNumbers(true, "gitpr"),
 	)

 	http.HandleFunc("GET /r/{user}", ctxMdw(ctx, userDetailHandler))
 	http.HandleFunc("GET /rss/{user}", ctxMdw(ctx, rssHandler))
 	http.HandleFunc("GET /rss", ctxMdw(ctx, rssHandler))
-	http.HandleFunc("GET /docs", ctxMdw(ctx, docsHandler))
 	http.HandleFunc("GET /", ctxMdw(ctx, indexHandler))
 	http.HandleFunc("GET /syntax.css", ctxMdw(ctx, chromaStyleHandler))
 	embedFS, err := getEmbedFS(embedStaticFS, "static")
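In the web.go hunk above, `MetaData.Desc` is typed `template.HTML`, which tells Go's html/template package to render the operator-supplied description verbatim instead of escaping it. A minimal sketch of that behavior — the `render` helper and the template text are illustrative, not from the patch:

```go
package main

import (
	"fmt"
	"html/template"
	"strings"
)

// render executes a tiny template against a Desc field typed as
// template.HTML, so markup in the string survives un-escaped.
// With a plain string field the same input would be HTML-escaped.
func render(desc string) string {
	t := template.Must(template.New("idx").Parse(`<div>{{.Desc}}</div>`))
	var b strings.Builder
	if err := t.Execute(&b, struct{ Desc template.HTML }{template.HTML(desc)}); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Println(render("<strong>welcome</strong>")) // prints <div><strong>welcome</strong></div>
}
```

This is also why `desc` is an operator-level setting: whatever HTML it contains is trusted and served as-is.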
old: go.mod
new: go.mod
 	github.com/google/uuid v1.6.0
 	github.com/gorilla/feeds v1.2.0
 	github.com/gorilla/websocket v1.5.3
+	github.com/hashicorp/golang-lru/v2 v2.0.7
 	github.com/jmoiron/sqlx v1.4.0
 	github.com/lib/pq v1.10.9
 	github.com/matryer/is v1.4.1
old: go.sum
new: go.sum
 github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
 github.com/hashicorp/golang-lru v1.0.2 h1:dV3g9Z/unq5DpblPpw+Oqcv4dU/1omnb4Ok8iPY6p1c=
 github.com/hashicorp/golang-lru v1.0.2/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
+github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
+github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
 github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
 github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y=
 github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
old: pkg/apps/pgs/uploader.go
new: pkg/apps/pgs/uploader.go
 )

 	specialFileMax := featureFlag.Data.SpecialFileMax
-	if isSpecialFile(entry) {
+	if isSpecialFile(entry.Filepath) {
 		sizeRemaining = min(sizeRemaining, specialFileMax)
 	}

 	return str, err
 }

-func isSpecialFile(entry *sendutils.FileEntry) bool {
-	fname := filepath.Base(entry.Filepath)
-	return fname == "_headers" || fname == "_redirects"
+func isSpecialFile(entry string) bool {
+	fname := filepath.Base(entry)
+	return fname == "_headers" || fname == "_redirects" || fname == "_pgs_ignore"
 }

 func (h *UploadAssetHandler) Delete(s *pssh.SSHServerConnSession, entry *sendutils.FileEntry) error {

 	}
 	// special files we use for custom routing
-	if fname == "_pgs_ignore" || fname == "_redirects" || fname == "_headers" {
+	if isSpecialFile(fname) {
 		return true, nil
 	}
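The uploader.go hunk narrows `isSpecialFile` to take a plain path string (and adds `_pgs_ignore`), so the same predicate can be shared by the uploader and the HTTP handlers. A self-contained sketch of the new shape, mirroring the patch:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// isSpecialFile mirrors the patched predicate: special files are matched
// by base name, so the check works for a path at any depth.
func isSpecialFile(entry string) bool {
	fname := filepath.Base(entry)
	return fname == "_headers" || fname == "_redirects" || fname == "_pgs_ignore"
}

func main() {
	fmt.Println(isSpecialFile("myproject/_redirects")) // true
	fmt.Println(isSpecialFile("myproject/readme.md"))  // false
}
```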
old: pkg/apps/pgs/web.go
new: pkg/apps/pgs/web.go
 	"net/http"
 	"net/url"
 	"os"
+	"path/filepath"
 	"regexp"
 	"strings"
 	"time"

 		"host", r.Host,
 	)

-	if fname == "_headers" || fname == "_redirects" || fname == "_pgs_ignore" {
+	if isSpecialFile(fname) {
 		logger.Info("special file names are not allowed to be served over http")
 		http.Error(w, "404 not found", http.StatusNotFound)
 		return
old: pkg/apps/pgs/web_asset_handler.go
new: pkg/apps/pgs/web_asset_handler.go
 	"net/http/httputil"
 	_ "net/http/pprof"

+	"github.com/hashicorp/golang-lru/v2/expirable"
+	"github.com/picosh/pico/pkg/cache"
 	sst "github.com/picosh/pico/pkg/pobj/storage"
 	"github.com/picosh/pico/pkg/shared/storage"
 )

+var (
+	redirectsCache = expirable.NewLRU[string, []*RedirectRule](2048, nil, cache.CacheTimeout)
+	headersCache   = expirable.NewLRU[string, []*HeaderRule](2048, nil, cache.CacheTimeout)
+)
+
 type ApiAssetHandler struct {
 	*WebRouter
 	Logger *slog.Logger

 func (h *ApiAssetHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 	logger := h.Logger

 	var redirects []*RedirectRule
-	redirectFp, redirectInfo, err := h.Cfg.Storage.GetObject(h.Bucket, filepath.Join(h.ProjectDir, "_redirects"))
-	if err == nil {
-		defer redirectFp.Close()
-		if redirectInfo != nil && redirectInfo.Size > h.Cfg.MaxSpecialFileSize {
-			errMsg := fmt.Sprintf("_redirects file is too large (%d > %d)", redirectInfo.Size, h.Cfg.MaxSpecialFileSize)
-			logger.Error(errMsg)
-			http.Error(w, errMsg, http.StatusInternalServerError)
-			return
-		}
-		buf := new(strings.Builder)
-		lr := io.LimitReader(redirectFp, h.Cfg.MaxSpecialFileSize)
-		_, err := io.Copy(buf, lr)
-		if err != nil {
-			logger.Error("io copy", "err", err.Error())
-			http.Error(w, "cannot read _redirects file", http.StatusInternalServerError)
-			return
-		}
-		redirects, err = parseRedirectText(buf.String())
-		if err != nil {
-			logger.Error("could not parse redirect text", "err", err.Error())
+	redirectsCacheKey := filepath.Join(h.Bucket.Name, h.ProjectDir, "_redirects")
+	if cachedRedirects, found := redirectsCache.Get(redirectsCacheKey); found {
+		redirects = cachedRedirects
+	} else {
+		redirectFp, redirectInfo, err := h.Cfg.Storage.GetObject(h.Bucket, filepath.Join(h.ProjectDir, "_redirects"))
+		if err == nil {
+			defer redirectFp.Close()
+			if redirectInfo != nil && redirectInfo.Size > h.Cfg.MaxSpecialFileSize {
+				errMsg := fmt.Sprintf("_redirects file is too large (%d > %d)", redirectInfo.Size, h.Cfg.MaxSpecialFileSize)
+				logger.Error(errMsg)
+				http.Error(w, errMsg, http.StatusInternalServerError)
+				return
+			}
+			buf := new(strings.Builder)
+			lr := io.LimitReader(redirectFp, h.Cfg.MaxSpecialFileSize)
+			_, err := io.Copy(buf, lr)
+			if err != nil {
+				logger.Error("io copy", "err", err.Error())
+				http.Error(w, "cannot read _redirects file", http.StatusInternalServerError)
+				return
+			}
+
+			redirects, err = parseRedirectText(buf.String())
+			if err != nil {
+				logger.Error("could not parse redirect text", "err", err.Error())
+			}
 		}
+
+		redirectsCache.Add(redirectsCacheKey, redirects)
 	}

 	routes := calcRoutes(h.ProjectDir, h.Filepath, redirects)

 	defer contents.Close()

 	var headers []*HeaderRule
-	headersFp, headersInfo, err := h.Cfg.Storage.GetObject(h.Bucket, filepath.Join(h.ProjectDir, "_headers"))
-	if err == nil {
-		defer headersFp.Close()
-		if headersInfo != nil && headersInfo.Size > h.Cfg.MaxSpecialFileSize {
-			errMsg := fmt.Sprintf("_headers file is too large (%d > %d)", headersInfo.Size, h.Cfg.MaxSpecialFileSize)
-			logger.Error(errMsg)
-			http.Error(w, errMsg, http.StatusInternalServerError)
-			return
-		}
-		buf := new(strings.Builder)
-		lr := io.LimitReader(headersFp, h.Cfg.MaxSpecialFileSize)
-		_, err := io.Copy(buf, lr)
-		if err != nil {
-			logger.Error("io copy", "err", err.Error())
-			http.Error(w, "cannot read _headers file", http.StatusInternalServerError)
-			return
-		}
-		headers, err = parseHeaderText(buf.String())
-		if err != nil {
-			logger.Error("could not parse header text", "err", err.Error())
+	headersCacheKey := filepath.Join(h.Bucket.Name, h.ProjectDir, "_headers")
+	if cachedHeaders, found := headersCache.Get(headersCacheKey); found {
+		headers = cachedHeaders
+	} else {
+		headersFp, headersInfo, err := h.Cfg.Storage.GetObject(h.Bucket, filepath.Join(h.ProjectDir, "_headers"))
+		if err == nil {
+			defer headersFp.Close()
+			if headersInfo != nil && headersInfo.Size > h.Cfg.MaxSpecialFileSize {
+				errMsg := fmt.Sprintf("_headers file is too large (%d > %d)", headersInfo.Size, h.Cfg.MaxSpecialFileSize)
+				logger.Error(errMsg)
+				http.Error(w, errMsg, http.StatusInternalServerError)
+				return
+			}
+			buf := new(strings.Builder)
+			lr := io.LimitReader(headersFp, h.Cfg.MaxSpecialFileSize)
+			_, err := io.Copy(buf, lr)
+			if err != nil {
+				logger.Error("io copy", "err", err.Error())
+				http.Error(w, "cannot read _headers file", http.StatusInternalServerError)
+				return
+			}
+
+			headers, err = parseHeaderText(buf.String())
+			if err != nil {
+				logger.Error("could not parse header text", "err", err.Error())
+			}
 		}
+
+		headersCache.Add(headersCacheKey, headers)
 	}

 	userHeaders := []*HeaderLine{}

 		return
 	}
 	w.WriteHeader(status)
-	_, err = io.Copy(w, contents)
+	_, err := io.Copy(w, contents)
 	if err != nil {
 		logger.Error("io copy", "err", err.Error())
new: pkg/cache/cache.go
+package cache
+
+import (
+	"log/slog"
+	"time"
+
+	"github.com/picosh/utils"
+)
+
+var CacheTimeout time.Duration
+
+func init() {
+	cacheDuration := utils.GetEnv("STORAGE_MINIO_CACHE_DURATION", "1m")
+	duration, err := time.ParseDuration(cacheDuration)
+	if err != nil {
+		slog.Error("Invalid STORAGE_MINIO_CACHE_DURATION value, using default 1m", "error", err)
+		duration = 1 * time.Minute
+	}
+
+	CacheTimeout = duration
+}
old: pkg/pobj/storage/minio.go
new: pkg/pobj/storage/minio.go
 	"io"
 	"net/url"
 	"os"
+	"path/filepath"
 	"strconv"
 	"strings"
 	"time"

+	"github.com/hashicorp/golang-lru/v2/expirable"
 	"github.com/minio/madmin-go/v3"
 	"github.com/minio/minio-go/v7"
 	"github.com/minio/minio-go/v7/pkg/credentials"
+	"github.com/picosh/pico/pkg/cache"
 	"github.com/picosh/pico/pkg/send/utils"
 )

 	Admin *madmin.AdminClient
 }

-var _ ObjectStorage = &StorageMinio{}
-var _ ObjectStorage = (*StorageMinio)(nil)
+type CachedBucket struct {
+	Bucket
+	Error error
+}
+
+type CachedObjectInfo struct {
+	*ObjectInfo
+	Error error
+}
+
+var (
+	_ ObjectStorage = &StorageMinio{}
+	_ ObjectStorage = (*StorageMinio)(nil)
+
+	bucketCache     = expirable.NewLRU[string, CachedBucket](2048, nil, cache.CacheTimeout)
+	objectInfoCache = expirable.NewLRU[string, CachedObjectInfo](2048, nil, cache.CacheTimeout)
+)

 func NewStorageMinio(address, user, pass string) (*StorageMinio, error) {
 	endpoint, err := url.Parse(address)
 }

 func (s *StorageMinio) GetBucket(name string) (Bucket, error) {
+	if cachedBucket, found := bucketCache.Get(name); found {
+		return cachedBucket.Bucket, cachedBucket.Error
+	}
+
 	bucket := Bucket{
 		Name: name,
 	}

 		if err == nil {
 			err = errors.New("bucket does not exist")
 		}
+
+		bucketCache.Add(name, CachedBucket{bucket, err})
 		return bucket, err
 	}

+	bucketCache.Add(name, CachedBucket{bucket, nil})
+
 	return bucket, nil
 }

 		ETag: "",
 	}

-	info, err := s.Client.StatObject(context.Background(), bucket.Name, fpath, minio.StatObjectOptions{})
-	if err != nil {
-		return nil, objInfo, err
-	}
+	cacheKey := filepath.Join(bucket.Name, fpath)
+
+	cachedInfo, found := objectInfoCache.Get(cacheKey)
+	if found {
+		objInfo = cachedInfo.ObjectInfo

-	objInfo.LastModified = info.LastModified
-	objInfo.ETag = info.ETag
-	objInfo.Metadata = info.Metadata
-	objInfo.UserMetadata = info.UserMetadata
-	objInfo.Size = info.Size
+		if cachedInfo.Error != nil {
+			return nil, objInfo, cachedInfo.Error
+		}
+	} else {
+		info, err := s.Client.StatObject(context.Background(), bucket.Name, fpath, minio.StatObjectOptions{})
+		if err != nil {
+			objectInfoCache.Add(cacheKey, CachedObjectInfo{objInfo, err})
+			return nil, objInfo, err
+		}
+
+		objInfo.LastModified = info.LastModified
+		objInfo.ETag = info.ETag
+		objInfo.Metadata = info.Metadata
+		objInfo.UserMetadata = info.UserMetadata
+		objInfo.Size = info.Size
+
+		if mtime, ok := info.UserMetadata["Mtime"]; ok {
+			mtimeUnix, err := strconv.Atoi(mtime)
+			if err == nil {
+				objInfo.LastModified = time.Unix(int64(mtimeUnix), 0)
+			}
+		}
+
+		objectInfoCache.Add(cacheKey, CachedObjectInfo{objInfo, nil})
+	}

 	obj, err := s.Client.GetObject(context.Background(), bucket.Name, fpath, minio.GetObjectOptions{})
 	if err != nil {
 		return nil, objInfo, err
 	}

-	if mtime, ok := info.UserMetadata["Mtime"]; ok {
-		mtimeUnix, err := strconv.Atoi(mtime)
-		if err == nil {
-			objInfo.LastModified = time.Unix(int64(mtimeUnix), 0)
-		}
-	}
-
 	return obj, objInfo, nil
 }
old: pkg/shared/storage/proxy.go
new: pkg/shared/storage/proxy.go
 	Ratio  *Ratio
 	Rotate int
 	Ext    string
+	NoRaw  bool
 }

 func (img *ImgProcessOpts) String() string {

 		processOpts = fmt.Sprintf("%s/ext:%s", processOpts, img.Ext)
 	}

+	if processOpts == "" && !img.NoRaw {
+		processOpts = fmt.Sprintf("%s/raw:true", processOpts)
+	}
+
 	return processOpts
 }
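The proxy change adds a `NoRaw` flag: when no processing options were set, the option string becomes `/raw:true` so the image proxy serves the file unmodified, unless the caller opts out with `NoRaw`. A small sketch of that builder (the struct is trimmed to the fields visible in the diff; behavior of the elided fields is assumed):

```go
package main

import "fmt"

// imgProcessOpts is a trimmed-down ImgProcessOpts from
// pkg/shared/storage/proxy.go, keeping just enough to show the
// NoRaw logic added in the patch.
type imgProcessOpts struct {
	Ext   string
	NoRaw bool
}

func (img *imgProcessOpts) String() string {
	processOpts := ""
	if img.Ext != "" {
		processOpts = fmt.Sprintf("%s/ext:%s", processOpts, img.Ext)
	}
	// New in the patch: with no explicit options, request the raw
	// bytes -- unless the caller suppresses that with NoRaw.
	if processOpts == "" && !img.NoRaw {
		processOpts = fmt.Sprintf("%s/raw:true", processOpts)
	}
	return processOpts
}

func main() {
	fmt.Println((&imgProcessOpts{}).String())            // /raw:true
	fmt.Println((&imgProcessOpts{NoRaw: true}).String()) // (empty)
	fmt.Println((&imgProcessOpts{Ext: "webp"}).String()) // /ext:webp
}
```

Serving raw bytes by default matters for the caching patch: cached object info describes the stored file, and transforming it in the proxy would silently invalidate sizes and ETags.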
-: ------- > 2: b004b64 chore(pgs): use http cache clear event to rm lru cache for special files
-: ------- > 3: 59f5618 refactor(pgs): store lru cache on web router