This blog is powered by the community here on the SoftwareLivre.org network and by the Planet Mageia English feed.
Plan to get rid of ConsoleKit in GNOME 3.14
Before the start of the GNOME 3.14 cycle, Ryan Lortie announced his intention to make most GNOME modules depend on a logind-like API. The API would just implement the bits that are actually used. According to Ryan, most GNOME modules only use a selection of the logind functionality. He wanted to document exactly what we depend on and provide a minimal API. Then we could write a minimal stub implementation for e.g. FreeBSD, as we’d know exactly what parts of the API we actually need. The stub would still be minimal: enough to allow GNOME to run, but that’s it.
Not done for GNOME 3.14. Needs urgent help.
As I didn’t see the changes being made, I asked Ryan about it during GUADEC. He mentioned he had underestimated the complexity of doing this. Further, his interests changed. Result: we still have support for ConsoleKit in 3.14, though functionality-wise the experience without logind (and similar) is probably getting worse and worse.
Systemd user sessions
In the future I see systemd user sessions more or less replacing gnome-session. The most recent discussions on desktop-devel-list indicated something like gnome-session would still stay around, but as those discussions were quite a while ago, this might have changed. We’re doing this because systemd, in concept, does what gnome-session does anyway, but better. Further, we could theoretically have one implementation across desktop environments. I see this as the next generation of the various XDG specifications.
Coming across as forcing vs “legacy”
From what I understood, KDE will also make use of user sessions, logind, etc. However, they seem to do this by calling the existing software “legacy” and putting everything into something with a new name. Then eventually things will break, of course. Within GNOME we often try to make things really clear for everyone, e.g. by using wording such as “fallback”. It makes clear that our focus is elsewhere and what will likely happen. I guess KDE is more positive. It might still work, provided someone spends the effort to make it work. In any case, the messaging done by KDE seems to be very good. I don’t see any backlash, though mostly similar things are occurring between GNOME and KDE. There are a few exceptions; e.g. the KWin maintainer explicitly tries to make the logind dependency as avoidable as possible. I find the KDE situation pretty confusing though; it feels uncoordinated.
Appearance that things work fine “as-is”
In a lot of distributions there are still a lot of hacks to make Display Managers, Window Managers and Desktop Environments work with the various specifications, and with software written many years ago. Various software still does not understand XDG sessions. It also does NOT handle ConsoleKit. Distributions add hacks to make this work, doing the ConsoleKit handling in a wrapper.
This is then often used in discussions around logind and similar software.
“My DM/WM/DE is simple and just works. There is no problem needing to be solved.”
There are various distributions whose goal is to make everything work; no regressions are allowed. If you use such a distribution and there is enough manpower, enough hacks will be added to ensure things work in the short term. However, those temporary hacks are hacks. E.g. if some software should support XDG sessions and it does not, eventually the problem is with that software.
Looking at various distributions, I see that those temporary hacks are still in place. An especially funny one is Mageia, where XDG session support is second class. The XDG session files are generated from different configuration files. This results in fun times when an XDG session file changes. Each time this happened, the blame quickly fell on the upstream software: “Why are they changing their session files? They should just never change.” While the actual problem is that the upstream files are thrown away!
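For reference, an XDG session file is just a small desktop entry, typically installed under /usr/share/xsessions/, which the display manager reads to offer the session. A minimal sketch (the path and values here are illustrative):

```ini
# /usr/share/xsessions/gnome.desktop (illustrative example)
[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session
Type=Application
```

When a distribution regenerates such files from its own configuration instead of shipping the upstream one, any upstream change to the Exec line or session name is silently lost.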
The support for unmaintained software has at various points resulted in preventable bugs in maintained software, while at the same time the maintained software is considered faulty. I find this tendency to blame utterly ridiculous.
There are many people who have some sort of dislike for systemd. In the QA session Linus had at DebConf, he mentioned he appreciates systemd, but he does NOT like the bug handling. In various other forums I see people really liking systemd, but still having doubts about its scope.
When either liking or disliking systemd, it is important to express the reason clearly and in a non-aggressive way. Unfortunately there are a few people who express their dislike in ways that’ll result in them being ignored completely. Examples are:
- Failure to understand that a blank “you cannot rely on it” statement is not helpful
If a project sees functionality within systemd that is useful, you’ll not get very far with stating that the project is bad for having used it, or suggesting that there is some conspiracy going on, or that the project maintainer is an idiot. That’s unfortunately often the type of “anti-systemd advocacy” which I see.
- Failure to provide any realistic alternatives
Suggesting that systemd-shim is an alternative to logind, for instance. It’s a fork, and it took 6 months or so to be aligned. Further, it’s a fork whose purpose is to stay compatible. It’s headed by Ubuntu (Canonical), who are going to use systemd anyway.
The suggestions are often so strange that I have real difficulty summarizing them.
- Continuous repeat of non-issues
E.g. focusing on journald, or disliking e.g. udev or dbus and presenting that personal dislike as a reason why everyone should not use systemd.
- Outright false statements
E.g. statements like “systemd is made only for desktops” or “all server admins hate it”. If you believe this to be true, I suggest doing your homework. That, or staying out of discussions.
- Suggesting doom and gloom
According to some of the anti-advocacy, there are a lot of really bad things in systemd. A few examples: my machine should continuously corrupt the journal files, my machine should often fail to boot, etc. As that is not the case, it pretty much destroys any credibility the people making these claims would have with me.
Anyone trying systemd for the first time will also notice that it just works. Resorting to this type of anti-advocacy will just backfire, because although systemd is NOT perfect, it does work just fine.
- Lack of understanding that systemd is providing things which are wanted
Projects have come to depend on systemd because it does things which are useful. You personally might not need those things; someone else believes he does need the functionality. Saying “I don’t” is not communication. At least ask why the other person believes the functionality is useful!
- Lack of understanding that systemd is focused on adding additional wanted functionality
Systemd often adds new functionality. A large part of that functionality might have been available before in a different way, which is something most people seem to worry about. But it’s usually added as a response to some demand or need. Having a project listen to everyone’s needs is awesome!
- Personal insults
This I find interesting. The insults are not just limited to e.g. Lennart; they extend to anyone who switched to systemd. A strategy of getting people to use something other than systemd by insulting them is a very bad strategy to have. Especially if you lack any credibility with the very people you need and are insulting.
- Failure to properly articulate the dislikes
There are too many blank statements which apparently have to be taken as truths. Saying that something is just bad (udev, dbus, etc.) will be ignored if the other person doesn’t see it as a problem. “That systemd uses this widely used component is one of the reasons not to use it”: such a statement is not logical.
- The huge issues aren’t
Binary logging by journald, for example. Anti-advocacy turns this into one of the biggest problems. The immediate answer from anyone is going to be that you can still have syslog and log as you do now. If you advocated this as a huge issue, then anyone trying to decide on systemd will quickly see that this huge issue is not an issue at all.
The attempt is to make people not use systemd. In practice, if the huge issues aren’t issues, then the anti-advocacy is actually helpful to adoption: the biggest so-called problems turn out to be easy, so anyone quickly gains confidence in systemd. Not what was intended!
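The syslog coexistence mentioned above is a matter of configuration: journald can forward everything to a traditional syslog daemon. A sketch of the relevant journald.conf options (the values are illustrative, the option names are journald’s own):

```ini
# /etc/systemd/journald.conf (excerpt; values are illustrative)
[Journal]
# Hand every message to the local syslog daemon as well,
# so existing text-based logging keeps working unchanged.
ForwardToSyslog=yes
# Optionally cap how much disk the binary journal itself may use.
SystemMaxUse=200M
```

With forwarding enabled, the binary journal becomes an addition rather than a replacement for the text logs admins already rely on.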
- Outright trolling
For this I usually just troll back.
What I suggest to anyone disliking systemd is to not make entire lists of easily dismissed arguments. Keep it simple (one argument is enough, IMO) and understandable, but also in line with the people you’re talking to. Understand whom you’re talking to. Anything technical can often be sorted out or fixed, so I suggest not focusing on that.
Once the reason against is clearly explained, focus upon what can be done to change things. Here the focus should be on gaining trust and giving an idea of what can be done (in a positive way).
Don’t ignore people who dislike systemd
Having seen the same arguments at least 100 times, it’s easy to quickly start ignoring anyone who doesn’t like systemd. I noticed someone saying on Google+ that systemd should not be used because Lennart is a brat. Eventually enough is enough and it is time to tell these people to STFU. But that’s not in line with one part of the GNOME Code of Conduct: “assume people mean well”. Not assuming that people mean well, and ignoring them, has bitten me various times.
Turns out, this person is concerned that his autofs-mounted home directories won’t be supported some time in the future. So this person does follow what Lennart writes. While it appeared to me he was just someone repeating the anti-advocacy bit, he has a valid concern. I still think it is unacceptable to call people names, and said so, but it is equally important to ensure things are still possible.
Can a “not supported” still be made to work?
Systemd developers are quick to point out that something is not supported: e.g. a kernel other than Linux, or a libc other than glibc. Some use cases are not supported either. But there’s an important thing to know: would the use case be impossible, or would it just take way more effort?
The type of effort is also important. For a different kernel/libc, you’d need a developer with good insight into these things. For other cases, it might be possible by customizing things. I assume the autofs home directories will always be possible, just not always taken into account.
If something is not supported, but can be used anyway provided you’re an “ok” sysadmin, that means for most people it’ll be possible. A “not supported by systemd” does therefore not translate one-to-one into “impossible”. If you want a different libc but you’re a sysadmin and not a developer, that’s quickly seen as impossible, while another “not supported” is actually perfectly possible.
IMO it is good that not everything is supported. Ensure that whatever is supported works really well. But at the same time, I think more focus should be on ensuring people do understand that a “not supported” does not mean “cannot work”.
My opinion on systemd as a release team member
I like *BSD. I like avoiding unneeded differences, as this eases portability.
There are some interesting tidbits I’ve learned. Apparently OpenBSD has a GSoC student working on providing alternative implementations of hostnamed, timedated, localed and logind. I don’t think it’s enough, because it needs to be fully maintained. I further think that a logind alternative cannot be written together with the other bits in just one summer. Whatever it is, I think this will make it even easier to use systemd, which is not really what some of the anti-advocacy intends, but oh well.
There seems to be another round of (temporary) increase of people disliking systemd. I’m pretty sure it’ll quiet down to normal levels again once Debian has systemd in a stable release for a few months.
Eventually they’ll notice that although systemd is not perfect, it just works. Unfortunately, this all doesn’t help with the concerns I still have.
What to do with ConsoleKit?
…the lazy will never last.
September 19th! That’s the last day when we will accept artwork submissions!
We’re looking for a new default background that will be shipped with Mageia 5. We might also pick one or two runners up that will be bundled as alternative backgrounds. Ideas for screensavers and other artwork that you think we could use will also be appreciated.
If you want to win the background contest, here’s a few points to keep in mind:
- Historically speaking, the images chosen for the default background were simple abstract artworks that used the Mageia color palette.
- Photos of real life objects/people/plants/animals will not even be considered.
- Your image must be an original piece, and you must be able to provide source files (xcf or svg). If you can’t comply for a technical reason, please get in touch with us on the Atelier mailing list.
- Your image must have a sufficient resolution.
Next week São Paulo, one of the biggest cities on this planet, will host the second KDE Latin America Summit – or, as we call it, LaKademy!
The event will be held at the FLOSS Competence Center of the University of São Paulo, an interesting center where academia, companies, and the community work together to create, improve, and research free and open source software.
At this event, the Latin American community will try something new: we will have presentations about KDE topics. At KDE-specific events in this part of the world it is more common to have only hacking sessions, with KDE presentations and short courses given only at more general free software events. This time we organized an event “open” to non-KDE contributors too – maybe by the end of the event they will be new gearheads.
The event program has a lot of topics: artwork, porting software from GTK to Qt (potential flamewar detected =D), KDE Connect, and more. I will present an introductory tutorial about C++ + Qt + KDE on Android. The main case study will be GCompris, and it will be interesting to show software with the same source code compiling and running on both Linux and Android. I will show other software too: liquidfun, a C++ library for liquid simulation (it has an amazing demo on Android); VoltAir, a QML-based game developed by Google for Android (and open source!); and maybe KAlgebra, but I still need to compile it.
Yes, it is C++ and QML on Android!
For the hacking sessions I will reserve some time to study the Qt5/KF5 port of Cantor; it is time to begin this work. On another topic, I would like to talk with my KDE colleagues about software to help with scientific writing… well, wait for it until next year. =) I will also work on the KDE Brazil bots on social networks to fix some bugs.
As for meetings, I expect to discuss communication tools (my proposal is to use KDE todo to help with promo action management), and to contribute to the evaluation of KDE Brazil’s actions in the country. Since the last LaKademy (2012, Porto Alegre), we have continued to spread KDE at free software events, and we have managed to bring several KDE contributors to Brazil too. Now we must think of more and newer activities to do.
But LaKademy is not only about work. We will have some cultural activities too, for example the Konvescote at Garoa Hacker Club, a hackerspace in São Paulo, and some beers in the Vila Madalena district. More importantly, I am very happy to see my KDE colleagues again (Brazil, why so big?).
See you at LaKademy!
(or at Akademy, but that is a story for another post )
Interesting to go from vacation with family in Croatia to France after a 10-hour drive, and then the day after to be on a plane flying to Chicago to attend my 3rd LinuxCon, held this time in the mythical city of Chicago.
While I arrived Monday evening, I had time to catch up some mail, make some conf calls on Tuesday before attending the first part of the event, which was the VIP dinner. An opportunity to talk to HP colleagues I met for the first time physically, even if we already interacted electronically previously.
Wednesday the 20th was the first day of the event, which started as usual with Jim Zemlin’s keynote. This time he chose to talk about what the Linux Foundation rules disallow: the Linux Foundation itself! And more broadly about the role of foundations in supporting open source development, their key “cleaning facility” role.
Jim had a quite funny slide explaining how everybody sees him, while what he is really doing is cleaning stuff so Linus, Greg and thousands of others can code and manage Linux.
He also announced the new LF certification program (Certified SysAdmin and Engineer). While I understand the need for more recognized Open Source and Linux professionals, unlinked to a company (unlike the RHCE one), I wonder whether we needed a new certification while we already have LPI. I hope the two will cooperate to avoid yet more proliferation. Not that proliferation is bad per se. But why duplicate the effort to create training materials, manage registrations, … when someone already works on that, maybe in a different way, but maybe patchable to be adopted by the LF. Hopefully this will be solved somehow.
After that we had the also traditional Linux kernel panel moderated by Greg Kroah-Hartman with Andy, S, Andrew Morton and Linus Torvalds of course. Nothing really new came out. Anyway, it’s always refreshing to see our heroes on stage, full of confidence and hope for what they do.
Linus insisted once more on the fact that he wants Linux to be more dominant on the desktop market. As a 21-year Linux desktop user myself, I can only agree with that. Where, however, is the “docker of the desktop” that will make everybody want to change and move to it? When people see my Mageia distro they’re always surprised how much stuff you can do out of the box with a Linux desktop. Phones have helped people move away from the monopoly interface, but Macs do not help bring people back to Linux. If at least all people attending LinuxCon and developing FLOSS would run Linux, that would be great!
Then it was time for elective sessions. I chose first to know more about devstack.
Sean Dague from HP presented OpenStack in 10 Minutes with devstack
devstack pulls everything from git. As it heavily modifies your system, rather do that in a VM/container. devstack launches Tempest (the OpenStack test suite) at the end of the install. Sean insisted on the flow of requests generated inside OpenStack and demonstrated how you can easily modify the devstack environment and re-run it to easily test your modification.
devstack provides an easy way to support modifications through a conf file. With the right setting, for example, you’ll avoid waiting for an answer from the server in case devstack exceeds the standard rate of queries.
You can also use localrc.conf to pass specific variables up to the right component.
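As a sketch of what such a configuration can look like: in current devstack the file is usually named local.conf with a `[[local|localrc]]` section (the variables below are standard devstack settings, the values are placeholders):

```ini
# local.conf (minimal sketch; values are placeholders)
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Variables set here are passed down to the right component by stack.sh.
```

Changing a value here and re-running stack.sh is the workflow Sean demonstrated for testing modifications.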
In order to use it, you need 4GB RAM (recommended). It can run in a VM (CirrOS will work nested). Sean warned that it does not re-clone git trees by default, and that clean.sh should put everything back in order (but it cleans stuff!).
Good presentation, easy to follow and having a quick demo part which confirms that devstack is easy to use :-)
Then I attended Joe Brockmeier’s (Red Hat) presentation around Solving the package issue
Joe explained the notion of software collections (living under /opt). They are available for RHEL, CentOS and Fedora, and bring a new scl command. If you type for example
scl enable php54 "app --option"
that app now uses PHP 5.4 while the rest of the system ignores it.
For that you’ll need new packages: scl-utils and scl-utils-build
There is a tool, spec2scl, to convert spec files into scl-compatible packages.
For more info you can look at http://softwarecollections.org
A remark I made to myself, and which was later explicitly made during the presentation, is that scl is useful for RHEL to provide newer versions of software on the enterprise distribution, while it can also help provide older versions of software in Fedora (which is moving so fast that not all software can adapt!).
It’s a sort of Debian backports for RHEL.
Joe also presented rpm-ostree (derived from ostree, git-like for system binaries providing an immutable tree). Under development for now, so not completely usable and probably the least interesting solution.
He moved on with Docker, but was pretty generic (on purpose), seeing it as complementary to package management, whereas I think Docker is another way of deploying software, one which doesn’t care about packages because it provides a layered deployment approach. While I have packaged Docker for Mageia, I’m not yet familiar enough with it to be sure of that, and I’m currently working on combining it with project-builder.org. So I will comment on that later.
Then it was time to moderate the FLOSS Governance round table, the reason I was attending LinuxCon. I had what I think is probably the best possible panel to cover this vast topic: Eileen Evans from HP, Tom Callaway from Red Hat, Gary O’Neall from Source Auditor Inc., and Bradley Kuhn from Software Freedom Conservancy. Of course 45 minutes wasn’t sufficient to cover all the subjects that are part of this, but I think the interactions were very interesting and lively, and I hope the audience enjoyed them and learned new aspects of this capital topic for our ecosystem. We talked about licenses, SPDX and its future 2.0 version, but also about foundations (echoing Jim Zemlin’s keynote), contribution agreements and tax usage (thanks Bradley!).
And this is just the first of a series of such round tables I’ll lead in future events, but more on that later on.
After that, I discussed licenses, the AGPL, and various related topics with Bradley Kuhn and Jilayne Lovejoy, and their feedback was as usual very rich.
It was then time to go back to the last keynote sessions. The first one I followed was from the CEO of a company new to me, Jay Rogers from Local Motors, who tries to make open hardware in the automotive sector. Worth watching to see whether they will be successful.
Then our own Eileen Evans was on stage to explain her view on the new FLOSS professional. In her place I’d have been even more impressed, as she had a full room and thus probably some pressure talking to all these devs and devops, and I think her voice showed that at the beginning. But when she got into the details of the presentation, she did as usual a great job and was particularly convincing. She showed how the FLOSS professional, more than others, comes from diverse backgrounds, as she illustrated with her own. She also showed the variety of activities that each of these people has to cope with every day, again illustrated by one of her work days, going from contract management to an OSRB meeting to an OpenStack Foundation board conf call.
That view of the new FLOSS professional was a convincing echo of Jim Zemlin’s call for more professionals and of the focus on people that many speakers underlined. The FLOSS ecosystem indeed needs many competencies in addition to developers, and FLOSS is so ubiquitous that the lack of resources is delaying some projects. Eileen also explained why this notion of the FLOSS professional is arising now: in short, because FLOSS usage has moved from hobbyists developing for themselves to professionals developing during work hours. She also covered the impact on companies, where working in networks/communities, between peers, is the rule compared to the classical siloed approach. Companies therefore need people who understand this way of working in order to evolve.
It was then time to catch a bus and enjoy discussing with peers at the Museum of Science and Industry during the evening event where we could also explore the museum.
Filed under: Event, FLOSS Tagged: Event, Gouvernance, HP, HPLinux, Linux, LinuxCon, Mageia, Open Source, project-builder.org
Guys, this one is coming from the heart.
We keep talking about how Mageia is this and Mageia needs that and Mageia did this totally awesome thing. But you know what? That’s a little bit misleading. See, Mageia isn’t some huge corporation, or even a small business. Mageia is an organization of people. People like you. And right now, we need people like you.
Anybody who has ever tried to get a job in the high-tech industry knows that the vast majority of starter positions are in QA (that’s Quality Assurance, something we ALL need). That’s because that area needs the largest workforce. After all, it takes just a few brilliant minds to come up with excellent algorithms, a few more people to actually write the code, and vast legions of people to try every possible way to make it not work the way it is supposed to. Because somewhere, someone will find that one weird thing that crashes the whole thing. And before we ship anything, be it a new operating system or just the smallest security update, we need assurance that it stands up to our high demand for quality.
And that’s where you come in.
Today, our QA team is very small. How small is it? It’s so small that right now there really is only one hardened QA expert. To begin with there were two, but one has been forced to cut back a lot of the work due to health issues; we hope he gets well soon. Now most of the work falls on our only other QA expert. She’s doing her best to give the quality assurances we need while trying to train the small handful of volunteers. Make no mistake, those “untrained professionals” are doing great work as well. In fact, they do an incredible amount of work of really top-notch quality despite being rather new to QA, and they’re swamped too. We need more people like them.
Every time a new update comes out it needs to be tested on both supported releases (currently Mageia 3 and Mageia 4) for both architectures (32 and 64 bit). That means that every little security update or new feature needs to be tested 4 times before we can ship it to you, and most of that work is being done by just one person.
We need more people and you can help.
It’s really easy. With a little bit of training and some hands-on experience, you too can become a great QA tester.
Just head over to the QA portal and find out how you can help.
Mageia is people, and right now we need some help.