Some time ago I added a foreign key column in a Rails migration, using the add_foreign_key method, which also creates an index on that column. That index gets a name of the form fk_rails_<some_hex>. Unfortunately I forgot to make the index unique, which my use case required.
I thought "no big deal, I'll just add that now" and ran a migration adding a unique index on the column:
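The original migration isn't shown in the post; it would have looked roughly like this. The table and column names (:users, :account_id) and the class name are made up for illustration, and a small stub stands in for ActiveRecord::Migration so the sketch runs outside a Rails app:

```ruby
# Stub so this sketch runs without Rails; in a real project only the
# migration class at the bottom would exist.
module ActiveRecord
  class Migration
    def self.[](_version)
      Migration
    end

    def initialize
      @calls = []
    end
    attr_reader :calls

    # Record the schema statement instead of touching a database.
    def add_index(table, column, **opts)
      @calls << [:add_index, table, column, opts]
    end
  end
end

# Hypothetical migration: same column as the foreign key, but unique.
class AddUniqueIndexToUsersAccountId < ActiveRecord::Migration[5.1]
  def change
    add_index :users, :account_id, unique: true
  end
end
```

In a real app, Rails supplies add_index and actually creates the index.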
and voilà, it worked like a charm on my local machine. It even removed the previous index that the foreign key migration had created on that column.
But running it on staging left me with two indexes on the column, one unique and one non-unique -_- .
A little debugging confirmed a pattern:
If the two migrations run in the same rails db:migrate invocation, the add_index migration also overwrites the index created by the foreign key migration. If they run in separate invocations, it just adds another index.
I'm not sure if this is a feature or a bug. To handle both the case where migrations run together (dev machines) and the case where they run separately (servers), I guarded the removal with remove_index if index_exists?.
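A sketch of that guarded migration (again with made-up table/column names, and a stub simulating the schema state so it runs outside Rails): whatever index is already on the column gets dropped before the unique one is added, so both scenarios end with exactly one unique index.

```ruby
# Stub so this sketch runs without Rails; in a real project only the
# migration class at the bottom would exist.
module ActiveRecord
  class Migration
    def self.[](_version)
      Migration
    end

    # Pretend schema state: the fk_rails_* index is already present
    # (the "migrations ran separately" scenario). The hex is made up.
    def initialize
      @indexes = { "fk_rails_5ca5a2e54a" => { unique: false } }
    end
    attr_reader :indexes

    def index_exists?(_table, _column)
      !@indexes.empty?
    end

    def remove_index(_table, _column)
      @indexes.clear
    end

    def add_index(_table, column, unique: false)
      @indexes["index_users_on_#{column}"] = { unique: unique }
    end
  end
end

class MakeAccountIdIndexUnique < ActiveRecord::Migration[5.1]
  def change
    # Guard: drop whatever index is already there, if any, so the
    # migration behaves the same whether or not the foreign key
    # migration ran in an earlier rails db:migrate.
    remove_index :users, :account_id if index_exists?(:users, :account_id)
    add_index :users, :account_id, unique: true
  end
end
```

Because of the guard, rerunning the same change is harmless too: the result is always a single unique index.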
This isn't anything breaking or a huge discovery, just something weird I found on my Rails journey that you should be aware of.
By analyzing the biggest open source repositories on GitHub (more info on the data below), we've seen that a contributor to any of those projects responds to only 2.3% of all issues on average. (We count as a contributor any person who commented on at least two issues they didn't open.)
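The analysis code isn't shown here, but the contributor definition is easy to sketch. Assuming each comment is reduced to (author, issue id, issue author), a hypothetical data shape, the filter and the per-person response rate could look like:

```ruby
# Hypothetical data shape: one record per comment.
Comment = Struct.new(:author, :issue_id, :issue_author)

# "Contributor": commented on at least two issues they did not open.
def contributors(comments)
  comments
    .reject { |c| c.author == c.issue_author }            # ignore self-replies
    .group_by(&:author)
    .select { |_author, cs| cs.map(&:issue_id).uniq.size >= 2 }
    .keys
end

# Fraction of all issues a given person commented on.
def response_rate(author, comments, total_issues)
  answered = comments.select { |c| c.author == author }.map(&:issue_id).uniq.size
  answered.to_f / total_issues
end
```

With real data, averaging response_rate over contributors(comments) yields the kind of per-project figure quoted above.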
This makes clear that for any bigger open source project, "watching" the repository results in a lot of spam for most people. If they don't respond, notifying them was of no value to the discussion after all.
We can also observe that only very few project maintainers handle any significant portion of the issues: just 6 of our human contributors care for more than every fifth issue. Here are our heroes:
However, we do see that 29.1% (117) of all contributors (402) are still subscribed to all notifications of the repository (watching it).
Switching to Polling
Many contributors switch to polling instead of watching the main repository.
However, we still see that the main maintainers keep watching the repository: without them it's very easy to miss new issues, and in a decentralized system it's hard to make sure that the right people look at the right issues.
In many communities we see home-grown bots arising that apply labels and sometimes assign people based on keywords. This works especially well for automatically created issues (e.g. from Sentry) but is not a full solution.
We've tried it. Contributors started mentioning keywords deliberately, and it didn't really work for user-reported issues.
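Such a keyword bot is essentially a static lookup table. A minimal sketch (the keyword rules and label names here are invented) shows both why it is easy to build and why it is easy to game: anyone who knows the keywords can trigger the labels.

```ruby
# Invented keyword-to-label rules; a real bot would load these from config.
RULES = {
  /traceback|exception/i     => "bug",
  /\bdocs?\b|documentation/i => "documentation",
  /sentry/i                  => "crash-report"
}.freeze

# Return every label whose keyword pattern matches the issue text.
def labels_for(issue_text)
  RULES.select { |pattern, _label| issue_text =~ pattern }.values.uniq
end
```

This works well for machine-generated reports with predictable wording, but for free-form user reports the keywords either never appear or get mentioned on purpose.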
We wouldn't be GitMate if we didn't strive for more. Our data suggests that people are spending way too much time on their notifications. We've maintained coala.io in the past, and we know that reading through all of them is impossible even for core maintainers. Static keyword-based automation doesn't seem to be enough.
For quite a while now we've been hacking on an artificial intelligence that helps you deal with this problem by analyzing exactly what every person on your team discusses on GitHub or GitLab, and mentioning the people who matter for solving any new issue.
GitMate is built as a fully automated triaging solution. Right now it already mentions related developers on new issues, finds duplicates, labels issues and closes old ones. It is already used by companies like ownCloud and Kiwi.com, and we're looking for more beta testers.
We've scraped data from a lot of GitHub repositories. We only wanted to look at the biggest ones (measured by scraped file size, i.e. roughly the amount of text across all issues). We've excluded 'ZeroK-RTS/CrashReports' because no humans seem to be operating that repository. The results refer to statistics drawn from those repositories:
We have filtered out any account with 'bot' in the username, as well as the ownclouders account, which uses GitMate.
If you're interested in more information, we can share our Jupyter Notebook and the data with you – just shoot us an email at email@example.com.
Phase 1 of GSoC has come to an end, and it has been an awesome month of work.
First and foremost, DocumentationStyleBear got merged after almost a year of work. There are still quirks in it, but it works for now.
Phase 2 deals with creating a DocBaseClass that acts essentially as an umbrella class for documentation-related bears: common functionality is abstracted away into this class, and the core functionality of each bear is left to the user to implement.
Then, once the DocBaseClass is ready, it's time to port the DocumentationStyleBear to the new framework.
As for the quirks of DocumentationStyleBear itself, there are some bugs to be fixed: #4029, #1856, #4200.
It has been a wonderful GSoC season at coala. The projects have been really exciting and I have been handed the mentorship of the project to enhance cobot.
What is cobot?
cobot is a bot used by the coala community to serve various purposes:
It is a way for newcomers to be easily invited to the community.
It assists the maintainers with various tasks such as issue assignment and reviewing.
It helps to search some documentation.
It exposes some fun websites such as Wolfram and lmgtfy to generate easy links.
Since its introduction it has become an integral part of the community. Members use it very frequently across chatrooms to automate various arduous tasks such as opening issues and changing their tags.
What has been achieved in this Phase?
Even though cobot is so integral to the community, it was kind of hacked together initially: it had no unit testing and was quickly put together in a not-so-clean manner. So the target for this phase was a functioning bot on par with the previous one, including all features and tests.
Here are some detailed updates by my extremely delightful and awesome student Meet:
He has successfully completed all targets set for this phase and now onwards to the next…
One of the most important targets for this project is the ability to search across the documentation, returning the results most relevant to the query. After much discussion about topic modeling and designing a search index, we settled on a smart manual index, because the documentation is simply not dense enough for a reliable automated technique.
My student has done a great job and has finished 100% of his milestone.
As always happens, his initial plan was quite rough and needed to be shaped up. Now, after that shaping, it looks more realistic and should be enough for him to finish his whole work in the next phases.
This phase was quite easy, as it was mostly mockups and design. Looking forward to Alex starting to implement the real things in phase 2!
Last year’s project consisted of revamping the documentation extraction API and
creating a language-independent class that parses documentation.
A bear was also planned, but it never got merged because of some regressions.
Saurav's work is to first get a working DocumentationStyleBear merged. By working, I mean it should work as intended on the main coala repos. To keep things simple, only Python is supported right now.
Niklas has been a lot of help. I can't begin to say how welcoming he has been to both of us; he familiarized himself with the documentation parsing codebase in just a few days and has done a lot of code reviews.
I hope that, with Saurav's commitment and Niklas' guidance, we can get the documentation extraction and parsing working, at least as a proof of concept.
It was a long time without a blog post. If you want a very quick overview of the great stuff happening at coala, GitMate and my life, just read the headings. For more details, I'm afraid you'll actually have to read the whole thing.
coala gets 10 GSoC Students
Last year went away fast and a new GSoC is coming for coala and me.
First things first: as unbelievable as it is, coala got 10 slots for Google Summer of Code as its own organization (first year!). We received more than 50 applications, of which only one or two were spam.
Unfortunately that means we couldn’t take a lot of students. A lot. Good students. Great. Students.
We did have some serious problems during the application phase, and there were things the admin side could have done better to make the competition fairer. The truth is: we weren't prepared for so many good applications, and our processes were not decentralized enough; too much work was done by the admin team directly and not enough by the mentors. We've learned our lesson and will seriously iterate on our GSoC processes for next year.
On to better parts. We now have 10 students who definitely deserved a slot, and I have the honour of mentoring/co-mentoring two of them together with my git-mate Fabian Neuschmidt:
The bonding phase has started and this year we are using GitLab milestones more strictly to track all projects. For every project we have a milestone for every phase. Check https://gitlab.com/coala/GSoC-2017/milestones/ and you’ll immediately get an impression on the progress of each and every project.
Using GitLab Burndown Charts for GSoC is Awesome
It is super crucial to know whether your student is on track with their GSoC project so you can see when things go wrong. With GitLab we can use burndown charts to identify those issues earlier. This is the first time we're doing this, so everyone is a bit behind schedule and we're just trying it out, but as GSoC progresses we'll get stricter about it.
I'm very happy to work with Naveen and Hemang this year; they're totally into their projects, highly motivated, and really add something to the community – we can learn a lot from each other!
If you visit gitmate.io now you will find a fully functional web app that allows you to configure GitHub plugins. The coala code analysis plugin isn’t ready yet but we’re working on it full steam and the first few plugins are totally ready.
With GitMate you can automate anything around PRs, issues and so on – any events on GitHub, and soon other platforms as well (if we're good, we'll have email support soon so GitMate can automatically review Linux kernel patches :)).
We'll give you more updates about GitMate at blog.gitmate.io if we find time to actually blog. Are you thinking about using this in your OS project or company? Shoot me an email at firstname.lastname@example.org!
Thanks to the great Yuki, who is constantly bored and thus lives a major part of his life in the caves below DigitalOcean, all coala websites and GitMate are now properly deployed and maintained.
Also, we're finally getting more and more content on blog.coala.io, from the GSoC students (whom we're shamelessly forcing to blog) but also from the coala community team, with brilliant efforts like coala recipes and other fun contests. This is awesome!
PyCon(s?) Come to EuroPython!
I've been travelling a bit. You might have seen me if you visited some PyCon. It's a lot of fun and I've been meeting many, many people! I'm looking forward to EuroPython, where we're trying to get many coalaians together. We're also in contact with the pytest community about maybe doing a joint sprint or so – I'm really looking forward to that.
If you meet me at a conference, be sure to talk to me!
We have always been active in engaging newcomers and teaching people about Open Source. It is only natural that we think about and work towards helping pupils all over the world take this step and learn about contributing to open source. (If you are a teacher reading this, reach out to us on coala.io/chat – we're very interested in working with you and are also starting an initiative in Germany to connect with schools.)
So let's get some data: we had 37 successfully completed tasks. Our mentors wrote an impressive 26 GCI tasks, some of which are multi-page step-by-step guides that are still used for non-GCI purposes.
An unimaginably huge part of the credit here goes to John Mark Vandenberg, who mainly administered GCI for us, mentored a huge number of students himself, and helped us write up the best possible tasks we could have. We are very thankful that we could build on his experience with the program and that we had his valuable input at every stage. Backstage, Mario Behling and Hong Phuc Dang from FOSSASIA worked tirelessly so we could make this happen.
If you meet any of them, consider inviting them for a cup of coffee and thanking them for what they are doing for our community, for FOSSASIA and for open source education.