GitLab 13.0 (about.gitlab.com)
282 points by marksamman 13 days ago | 83 comments

Hi! There are a lot of new features with this update, so here are a few highlights:

-improved version of Gitaly service called Gitaly Cluster for high availability git storage [1]

-simplified deployment of GitLab to Amazon ECS [2]

-added epic hierarchy on roadmaps (+ other improvements to our epic and milestone features) [3]

[1] https://about.gitlab.com/releases/2020/05/22/gitlab-13-0-rel...

[2] https://about.gitlab.com/releases/2020/05/22/gitlab-13-0-rel...

[3] https://about.gitlab.com/releases/2020/05/22/gitlab-13-0-rel...

-Reduced memory consumption of GitLab with Puma [1]

Puma reduces the memory footprint of GitLab by about 40% compared to Unicorn.

Considering one of the most frequent complaints about GitLab is memory usage, 40% is huge.

[1] https://about.gitlab.com/releases/2020/05/22/gitlab-13-0-rel...

It is! We're super excited about it. We've been running it on GitLab.com for some time now, and it's been great to see.

According to the blog post, the labels say this isn't available on any version (free, bronze, silver, gold) of gitlab.com?

We didn't mark it as a feature of GitLab.com since the company decides if it is used, we also pay for the hosting (memory) cost, and there should be no end-user difference between Puma and Unicorn. But I agree that there is an argument for marking it as available there since all of GitLab.com runs on it.

You're welcome! The team worked hard to move it down in one release!

oops misspoke and now I can't edit, I meant deployment of applications to AWS (hopefully you knew what I meant!)

I really like GitLab, especially the Kubernetes integration. It's nice to see some Terraform integration as well. I do hope some of the security scanning comes to lower tiers; that can be very helpful.

The biggest issue I find with GitLab is their stubborn refusal to price reporters differently from developers[0][1]. It really makes no sense other than that they just want more money (even though they are losing a bunch of business because lots of companies refuse to go along with it).



Are you suggesting reporter users should be more or less expensive than developer users?

If you've got some users at a higher tier how are they going to interact with other users within a group that isn't on that tier?

Features recently moved down from gold to silver (epics + roadmaps) enabled us to leave Jira, which has been great.

Reporters should absolutely be less. They are there to either view code or view & submit issues.

There are a lot of companies with 10 developers but 100 reporters. Why should those two groups of people cost the same? The interaction is easy - GitLab already works on the permissions with the different user types. And calculating cost is just as easy - how many reporters vs how many other types.

It does get slightly more complicated since those are project level features. I don't know how many users exist that don't have Developer access on at least one repository.

Enforcement of that seems less than ideal. You would have to have GitLab check the number of Reporters and Developers any time someone gets Developer access to a repo, which means the repo owner gets the error when you run out of licenses. Compare that to a plain user limit, where presumably whichever team owns GitLab gets the error when they add the user.

A 110-user account is going to be on one of the enterprise-y/gold/ultimate tier plans. They're going to get the enterprise-y price and subsidize the free and less expensive tiers.

I would guess that most startups or small dev shops using GitLab are going to have 5-6 devs to maybe 1 reporter user.

Edit - I'm not disagreeing with you that a reporter user needs fewer features and in theory has a lower cost to GitLab. I think the target customer for the gold/ultimate tier that has reporter users won't blink at paying for it.

Hi, the PM responsible for the Terraform and Kubernetes integrations here. We would like to improve both our K8s and TF support in the coming months, and it would be great to hear a bit more about your use cases. What do you like or dislike about the GitLab Kubernetes integration? Would you be open to a user interview?

Exporting environment variables via artifacts sounds like a strange decision, but I guess it's one way to do it. After years of waiting for jobs to be able to communicate other than by relying on file artifacts, this is somewhat disappointing.

I would like to see parent/child pipelines receive some love; currently it works, but quirks are all around. For example: it's not easy (or sometimes even possible) to pass pipeline variables from parent to child; the pipeline UI behaves differently when a pipeline is part of a relation (and is many times unusable or shows the wrong thing); you can't re-run manual jobs with the same variable initially passed, or even run them again with any other variable unless you delete all previous executions; there are strange limitations on masked secret variables; a $ in your password gets evaluated as a variable; etc.

Seems like an afterthought rather than a carefully designed feature set. Too organic for my taste (I guess Conway's law is in full effect there).

While GitLab CI is getting better (or more capable, rather than better) with every release, it does seem bloated, overly complex, full of surprises, not reproducible locally, and very slow (caching is a ridiculous feature that often makes a job run longer than it would without it), with a strange design that doesn't let me view my build log full screen or collect the entire pipeline output easily.

Windows is also generally lagging behind (I can't use pwsh as the runner shell today, after 4 years of it existing?). The worst recent offender: you display 'fail' in color on every job: "WARNING: Failed to load system CertPool: crypto/x509: system root pool is not available on Windows". This trips everybody into thinking the job failed when it didn't.

I think the GitLab folks need to start taking CI/CD/runners more seriously, or at least bring some fresh minds to it. After all, this is one of the major reasons many people use it.

Announced today as part of our 13.0 Release is the ability to pass variables/data between jobs without using artifacts - https://about.gitlab.com/releases/2020/05/22/gitlab-13-0-rel....
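For anyone curious, a rough sketch of how that variable passing can look in `.gitlab-ci.yml`, assuming it is the dotenv-report mechanism (the job names and variable here are invented):

```yaml
build:
  stage: build
  script:
    # write key=value pairs to a dotenv file during the job
    - echo "BUILD_VERSION=1.2.3" >> build.env
  artifacts:
    reports:
      dotenv: build.env   # exposes the variables to downstream jobs

deploy:
  stage: deploy
  script:
    # BUILD_VERSION is available as a regular environment variable
    - echo "deploying version $BUILD_VERSION"
```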

For a list of planned follow-up issues as we iterate on the parent-child pipeline MVC, please check out our epic for this feature: https://gitlab.com/groups/gitlab-org/-/epics/2750. We welcome your comments and upvotes on the issues that matter most to you.

EDIT: Sorry; I misread your first sentence; it seems you saw the announcement that there is an alternative to passing variables via artifacts. Please consider opening an issue (https://gitlab.com/gitlab-org/gitlab/-/issues/new) to let us know how you would like jobs to communicate.

They still haven't fixed process killing (that is, not sending SIGKILL immediately but sending SIGTERM so that child processes get cleaned up and the runner doesn't get stuck waiting for child processes to exit that don't know they should exit).

For being THE feature that made GitLab big, the whole CI area seems to be getting away from them.

The caching is so broken it's barely usable. You can't easily exclude files. The cache can't be updated incrementally. It's often slower than downloading all dependencies on every build.

Instead of caching node_modules, cache node_modules.tar.zst.

Then wrap your job in an untar/tar:

    # at job start: restore the cached archive (ignore failure on first run)
    tar --use-compress-program zstd -xf node_modules.tar.zst || true
    # at job end: repack node_modules for the next run
    tar --use-compress-program zstd -cf node_modules.tar.zst node_modules
(and you can delete things you don't need if you want)
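Wired into a `.gitlab-ci.yml`, the idea looks roughly like this (the image and job name are assumptions; zstd must be available in the image, and plain `npm install` is used since `npm ci` deletes node_modules before installing):

```yaml
build:
  image: node:14   # assumes zstd is installed in this image
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules.tar.zst   # one archive instead of thousands of small files
  script:
    - tar --use-compress-program zstd -xf node_modules.tar.zst || true
    - npm install
    - npm test
    - tar --use-compress-program zstd -cf node_modules.tar.zst node_modules
```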

That's weird; won't these all be Docker layers somewhere in the guts? So it's already compressed.

The caching is not broken and it's completely usable. It sounds like you've got a weird specific use case that doesn't benefit from a general solution.

The caching has worked great for me for Java / JVM languages, and decently for Go (but a proxy is usually better).

It is unusable for NPM or composer, or the version of yarn we use (maybe newer ones too but I know they were trying to improve their internal cache a while back). Anything with a ton of small files or entire git histories. It’s awful, and it’s not weird or specific.

It looks like it's now impossible to login to gitlab.com web interface via Tor. I only post it here because it was a recent sudden silent change that made gitlab.com unavailable to some, and there does not seem to be a response.

Can you try again - we are working on changes to our Cloudflare front end (https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issu...).

Works fine now, thanks! Sorry I made noise like this, I tried to find alternative channels of communication that presumably should work when someone cannot login via web for any reason, and failed.

Terraform integration, plus an improved DAST scanner that reads in your OpenAPI doc and generates a dynamic scan: for my company, which is striving for HITRUST certification, GitLab has made my life easier. We switched last year and it keeps paying dividends for our security programs.

I hope they begin to focus more on production stability; this week alone we've had 3 disruptions due to issues (although one looked like a Google Cloud outage).

What keeps me from using gitlab.com is that they flat out reject a signup with an email from my email provider.

Anybody else having that problem?

I wonder if they have a whitelist and only accept users who use the big boys like gmail etc. Or if they for some reason have totally legit email providers on their blacklist.

Have you tried signing up with a Gmail address and then changing your primary email in your profile?

Edit: @gmx. address by any chance?

GitLab Developer Evangelist here.

It could be that there is a legit email provider that got on the banned list inadvertently. Would you mind sharing the provider?

If you want to not do it in public you can DM me at olearycrew on Twitter or email boleary [at] gitlab.com

Hi! Yandex. Am I really the first to tell you? Yandex has a 30% market share in Russia.

No idea! But I'm going to find out :)

So we don’t think we’re explicitly blocking that and have users with that domain.

Could you DM / email me the details and a screen shot? I’ll try and help as best I can.

Have you tried signing up with an @yandex.com email? The signup page just reloads with this message:

1 error prohibited this user from being saved: Email is not from an allowed domain.

You don't need to have a Yandex account for that. Just put in whatever@yandex.com as your email and you get the above message.

I really hope some of the project management tools and scanning features will trickle down to the cheaper/free tiers some time.

I can understand that not everything has to be free but the current feature split of project management and scanning features among the tiers feels a bit haphazard with some crucial things at the highest prices, making it hard to justify the in-between tiers and impossible to buy the highest.

Epics and roadmaps were moved down to less expensive tiers recently.

Relevant convos https://gitlab.com/groups/gitlab-org/-/epics/1887

I hope the full Jira integration comes down to a level that would at least compare with BitBucket pricing. Right now it's more than double.

My company would switch if the cost was the same or close, but I can't justify paying double even with all the extra features in gitlab.

It's not likely. They are insistent on the pricing model, even if it does not make sense. See my post above.

Putting the Epic and Issue tools at the highest pricing tier is terrible for lots of people. We're forced to pay for features that we will never use.

I wish they would add some kind of add-on system for these features.

The Terraform bits are incredible, GitLab has almost made Terraform Enterprise irrelevant (aside from sentinel, for now). Will we see Vault management soon too?

GitLab Developer Evangelist here.

Short answer: yes.

Long answer: Also yes, but with a link to the relevant epic: https://gitlab.com/groups/gitlab-org/-/epics/2875

I love this transparency from GitLab.

Most companies simply ignore questions like these because letting users know the roadmap is somehow scary.

+1 It is awesome to see a product manager, dev, and designer actively collaborating and commenting on a feature you're waiting for.

Since Gitlab is listening: users sometimes attach files to issues on my project and then regret it. There doesn’t seem to be any way to permanently delete such an upload. Sometimes these files have passwords or their employers’ secrets. I asked for help in the forums but got no response. Help?

Hi! Thanks for reaching out! There's an open issue related to your question: https://gitlab.com/gitlab-org/gitlab/-/issues/16229

You can't delete attached files right now, even ones that were uploaded by mistake; it's better to raise this as a feature request in the issue tracker.

The issue tracker I think is a better place for such a feature request.

Last time I worked on the attachment code, there wasn't a persistent database relationship between uploads and notes, which would make this feature hard to implement. It would be a great feature though!

As said before, it's frustrating that you can't browse source code or view replies to issues without JavaScript enabled. This is very basic functionality. I don't expect advanced stuff to work without JavaScript, but simple operations definitely should. Every other Git repository manager I know of gets this right. Just my two cents.

I was so happy when I saw the attention paid to deployment on ECS that I thought maybe they finally (years later) put some more attention into GCP[1], but it appears not. Hopefully in some time in the next few months!

[1] https://gitlab.com/groups/gitlab-org/charts/-/epics/4

Hi Dimitri. We are in the process of deprecating the current GKE listing which Google will be replacing with a single click to deploy listing, currently targeted to go live before the end of June. Thanks for your interest and patience. For more details on other GCP plans see https://gitlab.com/groups/gitlab-com/-/epics/322. Larissa Lane, Distribution Product Manager

Awesome! I'll look forward to that release!

I love gitlab but I run into two issues constantly:

1. Gitlab CI with DinD kills the docker cache on every build. For a large monorepo this is a huge pain. Really needs to be addressed with some host volume mount of the docker daemon layer cache per concurrent job.

2. No way to specify the CPU/disk/memory requirements of a CI stage. I have a large number of builds that need ~512MB of RAM each and another build that needs 4GB. Because of this I need to make sure every instance of the runner has exactly 4GB of RAM available, which is a large waste of resources.

GitLab is a fantastic product; I'm super happy to have migrated multiple companies to it, and glad they keep innovating.
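On the first issue, a workaround some teams use (a sketch, not an official fix; it assumes images are pushed to the project's GitLab registry) is to seed the Docker-in-Docker daemon's empty layer cache from the registry with --cache-from:

```yaml
build-image:
  image: docker:19.03
  services:
    - docker:19.03-dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    # pull the last published image so its layers can serve as a build cache
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```

It trades registry bandwidth for rebuild time, so it helps most when the expensive layers change rarely.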

On your second point, you can achieve this with tags. See here: https://docs.gitlab.com/ee/ci/yaml/#configuration-parameters and https://docs.gitlab.com/ee/ci/runners/#using-tags

On the runner side you can tell it which tags to accept, and on the .gitlab-ci.yml side you can tag your jobs.

We mainly use this to make most jobs run on AWS EC2 spot instances, but a few use on-demand instead. We also use it to give a couple of jobs a larger instance size.
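Concretely, the job side of that wiring can look like this (the tag names are made up):

```yaml
unit-tests:
  tags:
    - spot           # picked up by any runner registered with the 'spot' tag
  script:
    - make test

big-build:
  tags:
    - large-memory   # routed to runners registered on bigger instances
  script:
    - make release
```

On the runner side, tags are set at registration time, e.g. with `gitlab-runner register --tag-list "large-memory"`.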

Can you do that within the same repo? I mainly work in a large monorepo.

Yep - I use a monorepo too. You just set tags on your runners and tags on your CI jobs, and GitLab's scheduler will assign the jobs to the correct runners for you. It's worked perfectly for us.

Already bumped our postgres RDS to 11 for this. Upgrade time! [edit] Upgrade went well

Is PG 11 required for this new release? Does PG 10 work?

It probably works currently, but they want to be able to depend on features supported only in pg11+ to get better performance.


Yes and don’t know. Why risk it.

I am running GitLab in Docker (using the official images); how would you go about upgrading from version 12.10 without losing any data? I am using volumes for the GitLab data.

Please read through the upgrade notes for 13.0 as there are some important changes like PG11 being a minimum requirement.

As for the actual upgrade, you should be able to follow the standard process outlined here since you are persisting your data outside the container: https://docs.gitlab.com/omnibus/docker/#update

Updated without any issues, thanks.

The thing that is really missing in GitLab CI is the ability to define a matrix of builds.

We have to do the same actions on different environments and it is very troublesome.

Agreed! That's why we are working on it this release - https://gitlab.com/gitlab-org/gitlab/-/issues/15356

I would love to see a situation where we could predefine a matrix and then toggle different products on or off.

For example, we have:

Models: A, B, C, D. Builds: Debug, Release.

Normally this would be

A Debug, A Release, B Debug, B Release

And so on, and so on. In our case, where we have about 90 different models, it would be extremely useful to be able to configure, via a GUI of some kind, which of those intersections we want, rather than rebuilding the entire set for a configuration option which only affects one or two models.

At this point, the only option available to us is the Jenkins Matrix Build Plugin, which is awful in a few fairly frustrating ways, but is also the only thing that does what we want.

Examples: configuring an SCM in a matrix build job results in one svn checkout for the matrix build job and one checkout each for the cartesian product, and then, since we have a separate child job for doing builds, one checkout each for the child jobs. We don't want the matrix job to do any checkout, but if you configure it with a Git or SVN URL then it will do it regardless.

You can get quite close with includes and yaml anchors
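A sketch of the anchors approach (job names, variables, and the build script are invented):

```yaml
.build: &build
  script:
    - ./build.sh "$MODEL" "$BUILD_TYPE"

model-a-debug:
  <<: *build
  variables:
    MODEL: "A"
    BUILD_TYPE: "Debug"

model-a-release:
  <<: *build
  variables:
    MODEL: "A"
    BUILD_TYPE: "Release"

# ...and one job per model/build combination you actually want.
```

The anchor deduplicates the job body, but every cell of the matrix still has to be enumerated by hand, which is exactly what gets unwieldy at scale.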

I have less than 20 environments to configure and it is already extremely messy in gitlab CI.

So messy that at the moment it is just a little toy and it will never actually work in production.

With 90? No, there is no way you can create a reliable GitLab CI setup with so many environments.

Besides, there are several bugs in GitLab CI, and support is basically useless even for an organisation like ours, where I believe we are one of their largest customers. Our tickets (or at least the ones I create) are simply ignored.

Oh wow! This is very much the kind of feedback we are looking for as we redesign our environments views. Can you share your issues here?

Nothing against GitLab, as I use it and pushed for it in my company, but I feel it's heading towards Azure DevOps (this might be entirely baseless). By that I mean it's geared more towards enterprise teams and businesses than startups looking to build stuff and deploy. I say this based on the complexity of their UI, their feature set, and what they seem to prioritize.

Most businesses prioritize what people pay them for (or they run out of money). Gitlab is accumulating enterprise features because enterprises are paying them for those features. Most places I've worked have used GitHub and it's good enough. Some also had Gitlab for "secret" stuff but if you trust GitHub with your main code then you already trust them with a lot so might as well just use them for all code hosting.

Too bad for me my company's IT team is going to force my group off of gitlab and on to azure devops.

Does GitLab offer a stripped-down version with only vanilla 'git'?

You might want to look at gitea: https://gitea.io/en-us/ it is a pretty minimal git web application.

I wouldn't call it minimal. It's got enough for many workloads.

GitLab Developer Evangelist here.

You can install GitLab and only _use_ it for 'git' (source control management). But the install is the same: GitLab is a single application.

You only need an SSH server for that.

You can combine SSH with multiple users to provide repo-level permissions (not branch-level). You use standard Linux permissions to enable deploy keys, and git-shell on the read-only SSH accounts for the deploy keys. Without writing a new shell you wouldn't be able to do branch-level protection.

Do you have a reference for such a barebones git setup anywhere? I’d love to get rid of Gitea currently in use for the team and replace it with simple Git + SSH. We don’t need branch level access control.

Last I checked, I gave up after trying to set up keys for each user under a 'git' user in the VM running git. Any guidance here would be of great help.
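A minimal sketch of that setup (the /tmp path and repo name are just for demonstration; in practice the repository lives under a dedicated 'git' user's home):

```shell
# 1. On the server, create a bare repository:
mkdir -p /tmp/git-demo
git init --bare --quiet /tmp/git-demo/project.git

# 2. Append each developer's SSH public key to the git user's
#    ~/.ssh/authorized_keys (one line per key).

# 3. Optionally restrict the account to git commands only:
#      sudo chsh -s "$(command -v git-shell)" git

# 4. Developers then clone over plain SSH:
#      git clone git@server:/home/git/project.git

# sanity check: confirm the repository is bare (prints "true")
git --git-dir=/tmp/git-demo/project.git rev-parse --is-bare-repository
```

Everyone whose key is in authorized_keys gets read/write access to every repo owned by that user; finer-grained control is where tools like Gitolite come in.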

You could also look into Sourcehut depending on what your needs are.


Issue edit history still has not been enabled.

