The Software Livre Bahia Project (PSL-BA) is an open movement that seeks, through cooperative effort, to spread at the state level the ideals of freedom promoted by the Free Software Foundation (FSF), thereby democratizing access to information through the resources offered by Free Software. This effort is grounded in everyone's collaboration, forming a synergistic movement that converges on the realization of the ideals of Freedom, Equality, Cooperation and Fraternity.
The Software Livre Bahia Project is formed by individuals who work in public and private institutions, companies, governments, NGOs and other sectors of society. The project is not subordinate to any entity or social group, and it establishes no formal hierarchy in its internal structure.
DebConf15, testing debian packages, and packaging the free software web
30 August 2015, 19:12
This is my August update, and by far the coolest thing in it is DebConf.
Debconf15
I never get tired of saying it is the best conference I have ever attended. First, it's a mix of meeting both new people and old friends, with the chance to chat with people whose work you admire but had never had a chance to meet before. Second, it's always quality time: an informal environment, and interesting and constructive presentations and discussions.
This year the venue was again very nice. It was also great to see so many kids and families around, and that was no coincidence: this was the first DebConf with organized childcare. As the community gets older, this is a very good way of keeping those who start having kids from being alienated from the community. Of course, not being a parent yet, I have no idea how hard it actually is to bring small kids to a conference like DebConf. ;-)
I presented two talks:
-
Tutorial: Functional Testing of Debian packages, where I introduced the basic concepts of DEP-8/autopkgtest and went over several examples from my packages, giving tips and tricks on how to write functional tests for Debian packages (a minimal test sketch follows this list).
- Video recording (webm, ~470MB)
- slides (PDF)
-
Packaging the Free Software Web for the end user, where I presented the motivation for, and the current state of shak, a project I am working on to make it trivial for end users to install server side applications in Debian. I spent quite some hacking time during DebConf finishing a prototype of the shak web interface, which was demonstrated live in the talk (of course, as usual with live demos, not everything worked :-)).
- Video recording (webm, ~450MB)
- slides (PDF)
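For readers who have never seen a DEP-8 test: here is a minimal, hypothetical sketch of what one looks like; the package name and command are made up, not taken from the talk.
# debian/tests/control -- declares the tests and their dependencies
Tests: smoke
Depends: @
# debian/tests/smoke -- an executable script; a non-zero exit status means failure
#!/bin/sh
set -e
# exercise the installed package in the most basic possible way
mypackage --version
The autopkgtest runner (adt-run at the time) picks these up and runs them against the installed package in a testbed.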
There was also the now traditional Ruby BoF, where we discussed the state and future of the Ruby ecosystem in Debian, and an impromptu Ruby packaging workshop where we introduced the basics of packaging in general, and of Ruby packaging specifically.
Besides shak, I was able to hack on a few cool things during DebConf:
- debci has been updated with a first version of the code to produce britney hints files that block packages that fail their tests from migrating to testing. There are some issues to be sorted out together with the release team to make sure we don't block packages unnecessarily, e.g. we don't want to block packages that never passed their test suite, where most likely the test suite, and not the package, is broken (illustrative hint lines follow this list).
- while hacking I ended up updating jquery to the newest version in the 1.x series, and in fact I guess I ended up adopting it. This allowed me to drop the embedded jquery copy I used to have in the shak repository, and since then I was able to improve the build to produce output that is identical to the one produced by upstream (except for a build timestamp inside a comment and a few empty lines), without using grunt.
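For illustration only (this is not debci's actual output), a britney hints file is a plain text file with one hint per line; a hint that blocks the migration of a source package looks roughly like this, shown here for two hypothetical packages foo and bar:
block foo
block bar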
Miscellaneous updates
-
Rails 4.2 in unstable: in order to support Diaspora (currently in experimental), and an upcoming Gitlab package (WIP). This requires quite some updates, NEW packages, and also making sure that Redmine is updated to a new upstream version. I did a few updates as part of this effort:
- rails 2:4.2.3-3
- ruby-arel 6.0.3-1
- ruby-coffee-script 2.4.1-1
- ruby-coffee-script-source 1.9.1.1-1
- ruby-commander 4.3.5-1
- ruby-execjs 2.4.0-1
- ruby-globalid 0.3.6-1
- ruby-jbuilder 2.3.1-1
- ruby-jquery-rails 4.0.4-2
- ruby-minitest 5.8.0-1
- ruby-multi-json 1.11.2-1
- ruby-rack-test 0.6.3-1
- ruby-sass-rails 5.0.3-1
- ruby-spring 1.3.6-1
- ruby-sprockets 3.3.0-1~exp2
- ruby-sprockets-rails 2.3.2-1~exp1
- ruby-sqlite3 1.3.10-1
- ruby-turbolinks 2.5.3-1
- rerun (NEW), a tool to launch commands and restart them on filesystem change. Very useful when writing sinatra/rack applications.
- vagrant: new upstream release, supporting VirtualBox 5.0
- pinpoint: new upstream release, ported to clutter-gst-3.0
- chake: new upstream release
- gem2deb: new release with several improvements, and a bug fix followup
- chef: fix installation of initscripts
- pry: fixed incompatibility with new ruby-slop (RC bug)
- foodcritic: fixed test suite run during build (RC bug)
- library updates:
- ruby-grape-logging
- ruby-hashie (2 RC bugs)
- ruby-listen: new upstream release, fixed test suite (RC bug)
- ruby-rspec-retry: new upstream release, fixed test suite (RC bug)
- ruby-dbf: new upstream release (sponsored, work by Christopher Baines)
- ruby-bootstrap-sass: new upstream release + fixed to work on non-Rails apps
- ruby-rails-dom-testing (NEW, dependency for rails 4.2)
- ruby-rails-deprecated-sanitizer (NEW, dependency for rails 4.2)
- ruby-rmagick new upstream release
- ruby-uglifier new upstream release
-
ruby-cri (RC bug)
- I had been making source+arch:all uploads for a while, but this was my first ever source-only upload of an architecture-independent package to Debian, following the recent developments on the topic.
Installing the Docker host on Windows Server
30 August 2015, 10:07
We had already mentioned, back when version 1.6 was released, that the Docker client was available for installation on Windows; now we are going to show that it is possible to install the Docker host on Windows Server 2016 TP3.
The environment is still in beta, both Windows Server 2016 TP3 and Docker's compatibility with Windows. The push function, for example, is not enabled yet.
Even in beta, I think it is worth testing, at least to understand how it works.
If you use GNU/Linux and don't want to install Windows on your hard disk, you can use VirtualBox for that, but don't forget to install its latest version. I had to install VirtualBox 5.0.2.
First, download the Windows Server 2016 TP3 CD.
Then create a new virtual machine of type "Windows" and version "Other Windows 64-bit".
Mount the ISO you have just downloaded and start the installation.
When asked which type of installation to perform, install the one that does not require the user experience.
The installation will be quite fast (surprisingly enough). It will ask you to change the password; to move between the password fields use "tab", not "enter".
Once you are given access to the console, type:
powershell
Then type:
wget -uri http://aka.ms/setupcontainers -OutFile C:\ContainerSetup.ps1
After downloading the script, type the command below to install Docker:
C:\.\ContainerSetup.ps1
It will restart your virtual machine and take a while on this screen (if your Internet connection is as slow as mine):
This is the screen that shows Docker has been installed successfully:
Note: if you get a "timeout" error, try the command again. That happened to me and right afterwards it worked fine; it just took a while.
Starting a container is very simple:
docker.exe run -it windowsservercore cmd.exe
Unfortunately it is not yet possible to run GNU/Linux containers on the Windows Docker host, nor vice-versa, but you can use the docker commands and Dockerfiles the same way, just using "RUN" commands appropriate to each operating system (see the sketch below).
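As a purely illustrative sketch (assuming the windowsservercore base image used above; the commands themselves are arbitrary), a Windows Dockerfile could look like this:
# hypothetical Dockerfile for a Windows Server container
FROM windowsservercore
# shell-form RUN lines are executed by cmd.exe on Windows instead of /bin/sh
RUN dir
CMD ["cmd.exe"]
It would be built the usual way, e.g. with docker.exe build -t myimage .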
Docker works on Windows much like it does on GNU/Linux, that is, the promise is that it runs containers in an isolated way there as well.
Have fun!
Sources:
http://www.virtualclouds.info/?p=3393
https://blog.docker.com/2015/08/tp-docker-engine-windows-server-2016/
Docker 1.8 released
25 August 2015, 2:05
The Docker news never stops! Another release and lots of new features.
Docker Content Trust
It is now possible to sign images with your private key before sending them to the cloud, which makes it possible to validate the images and prevent tampering along the way. This makes the solution as a whole much more secure.
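On the client side, enabling it is a matter of a single environment variable; a quick sketch (the image name below is made up):
# enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1
# with trust enabled, images are signed on push and verified on pull
docker push myuser/myimage:latest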
Want to read a bit more about it? See this link.
Docker Toolbox
A new installation package for Mac OS X and Windows, which includes the Docker client, Machine, Compose (the latter only for Mac OS X) and VirtualBox. Everything you need.
More information about the Toolbox? Read it here.
Docker Engine 1.8
This new version of the Docker engine brings the most significant changes of this release! Remember that in the release I covered here it was announced that Docker had logging support? Well, that support has now grown: GELF and Fluentd are supported.
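A sketch of what using one of the new drivers looks like (the GELF endpoint and image name are just examples):
docker run --log-driver=gelf --log-opt gelf-address=udp://logserver:12201 myimage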
Volume plugins from third parties, such as Blockbridge, Ceph, ClusterHQ, EMC and Portworx, are now stable!
The docker binary now supports copying files into a container:
docker cp foo.txt mycontainer:/foo.txt
The "ps" command now supports customizing its output with the "--format" option.
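For example, to show only a few columns (the placeholders are standard Go template fields understood by the docker CLI):
docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Status}}"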
Finally, the docker client configuration is now stored in ~/.docker. If you need to run multiple configurations on a single machine, you can use the --config parameter or the DOCKER_CONFIG environment variable.
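A quick sketch of both options (the directory name is arbitrary):
# point the client at an alternative configuration directory
docker --config ~/.docker-work ps
# or, equivalently, via the environment variable
DOCKER_CONFIG=~/.docker-work docker ps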
How to add the virtualhost to Varnish logs
15 August 2015, 3:00
Some time ago I wrote a post showing how to configure Varnish to write logs in a modified version of Apache's Combined Log Format. The modification adds the virtualhost (%v) at the beginning of each log record, and in Apache's syntax it looks like the following:
LogFormat "%v %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
This was done with varnishncsa-vhost, a script that makes Varnish store logs in the format above. This script should be obsolete by now, since recent versions of Varnish support customizing the log format through the -F option, but a problem in the Debian package prevents doing it "the right way"™.
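For reference, the "right way" would be roughly the invocation below, which reproduces the Apache format above using the Host request header in place of %v; this is a sketch, not the exact command line used by the package:
varnishncsa -F '%{Host}i %h %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i"'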
This problem was mentioned in Workaround for broken varnishncsa logging due to shell mishandling of spaces in LOG_FORMAT variables and a few solutions were suggested, but they all smell like fragile workarounds. The problem has also been reported in Debian as #657449 varnishncsa: please add a config option to allow a custom logging format (patch), but it has not been fixed yet.
Why am I telling you all this?
I recently needed to change the Varnish log format on a production server and ended up using varnishncsa-vhost again. It worked very well, and saved me from those seductive hacks that break on the next upgrade.
So, if this is useful to you in any way, use the repository below; I have uploaded a new version of the varnishncsa-vhost Debian package there:
deb http://debian.joenio.me unstable/
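Using it would be roughly like this (the sources.list file name is arbitrary, and the binary package name is assumed to match the one mentioned above):
echo 'deb http://debian.joenio.me unstable/' > /etc/apt/sources.list.d/joenio.list
apt-get update
apt-get install varnishncsa-vhost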
c3video for debconf #5
14 August 2015, 15:13
This is a follow-up to my previous post about the DebConf videoteam using a new software stack for the next conferences: http://acaia.ca/~tiago/posts/c3video-for-dc-take-4/.
This is about the encoding step from C3TT, mostly done by the script named D-encoding.
We can have many different encoding templates in the system. They're XSLT files which generate the XML needed to create the local encoding commands, and more than one encoding template can be assigned to a conference.
XSLT encoding templates
A general comment: each meta ticket (say, the original ticket with meta info about the talk) will generate two child tickets over time, a recording one and an encoding one, each with their own states. If things go wrong, an ingest ticket is created. More details can be found here.
Child tickets
So I've got the properly encoded files in my Processing.Path.Output directory, and the ticket is marked as encoded by the script. There's also a postencoding step, executed by E-postencoding. As far as I understand, it's intended to be a general post-processing hook for encoded files; for instance, it can produce an audio file and make it available on the Auphonic service. As we won't use that, we may want to set the property Processing.Auphonic.Enable to no.
The next step starts from the operator side. Just going to the Releasing tab in the web interface, choosing the ticket to check and doing a quick review in the encoded file:
Releasing tab
Then, if everything looks fine, click Everything's fine:
Check encoded file
From this point the ticket will be marked as checked and the next script (F-postprocessing) will take care of pushing the video(s)/audio(s) to the target place, as defined by Publishing.UploadTarget. I had to set the encoding template property EncodingProfile.Extension myself. We can optionally set Publishing.UploadOptions (keep that in mind, as it doesn't seem to be documented anywhere else).
So I now have the first DebCamp encoded video file uploaded to an external host, entirely processed using the C3TT software stack. We may also want to use one very last script to release the videos (e.g. as torrents, to different mirrors and other online services) if needed. This is script-G-release.pl which, unlike the others, won't be run by the screen UI in the sequence. It has some parameters hardcoded in it, although the code is very clear and ready to be hacked. It will also mark the ticket as Released.
Released!
That's all for now! In summary: I've been able to install and set up C3TT over a few days in DebCamp, and will be playing with it during DebConf. If things go well we'll probably be using this system as our video processing environment for the next events.
Most of the CCC VoC software can be found at https://github.com/voc. Having a look at what they're developing, I feel that we (DebConf and CCC) share pretty much the same technical needs. And, most importantly, the same spirit of community work to bring part of the conference to those unable to attend.
DebCamp was warm!
c3video for debconf #4
13 August 2015, 14:23
This is a follow-up to my previous post about the DebConf videoteam using a new software stack for the next conferences: http://acaia.ca/~tiago/posts/c3video-for-dc-take-3/.
As mentioned before, C3TT provides a set of scripts which work in the background for most of the reviewing and video processing tasks. The first one just checks whether the talk is done and marks the related ticket as recorded.
The second script, B-mount4cut, does a nice job of mounting a custom fuse filesystem providing the following files (a more detailed explanation is available here):
uncut.dv: full original dv file used as input file for the final trimming.
project.kdenlive: Kdenlive project file for the operator. Once it's saved with the trim marks, fuse-vdv will parse it and use the marks for cutting.
cut.dv: contains only the frames between the trim marks extracted from project.kdenlive.
cut-complete.dv: contains the frames between the trim marks extracted from project.kdenlive, plus a prepended intro and an appended outro. Paths for these files should be set beforehand in the web interface through the Processing.Path.Outro and Processing.Path.Intros properties. The outro can be, for instance, a common video thanking the sponsors. The intro is usually an individual frame for each talk, a colorful presentation poster. We can also set the intro duration in the Processing.Duration.Intro property.
cut.wav: demuxed audio from cut.dv
Note: in fact, fuse-vdv provides virtual video files as a concatenation of the original input files, thus avoiding copying large amounts of redundant data. Ideally, these fuse mountpoints will be shared over the network with operators via glusterFS, but I'll skip that for now.
After adding the trimming marks and saving the project using Kdenlive, the operator should go to the web interface and mark the ticket as cut:
Mark ticket as cut
Note: I've been able to set the marks in Kdenlive by double-clicking the video and then editing the crop start/end options. The "[" and "]" buttons didn't work in my Kdenlive version for some unknown reason.
In the current DebConf video environment I had to write a link builder to translate the path/file names to the C3TT format. So, for Amsterdam/2015-08-13/09:57:02.dv we should have an amsterdam-2015.08.13-09_57_02.dv symlink.
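A minimal shell sketch of such a link builder; the source and destination paths are assumptions, and this is not the actual script used:
# turn <Room>/<YYYY-MM-DD>/<HH:MM:SS>.dv capture files into C3TT-style symlinks
SRC=/srv/video/raw
DST=/srv/video/capture
for f in "$SRC"/*/*/*.dv; do
  room=$(basename "$(dirname "$(dirname "$f")")" | tr 'A-Z' 'a-z')
  day=$(basename "$(dirname "$f")" | tr '-' '.')
  time=$(basename "$f" .dv | tr ':' '_')
  ln -sf "$f" "$DST/$room-$day-$time.dv"
done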
From now on the system will deliver the next tasks to the C-cut-postprocessor script. This script just reads the marks from the Kdenlive project file and does the cutting job. So far it has worked perfectly here for the first video sessions in DebCamp, with zero hacks to the original code, wow!
The next post will be about the encoding script, named D-encoding.
c3video for debconf #3
12 August 2015, 22:07
This is a follow-up to my previous post about the DebConf videoteam using a new software stack for the next conferences: http://acaia.ca/~tiago/posts/c3video-for-dc-take-2/.
Outdated documentation on the current subject is available at https://wiki.fem.tu-ilmenau.de/streaming/projekte/c3/28c3/crs/pipeline. Although the system may work differently nowadays, the basic idea remains the same. Newer, but incomplete, documentation can be found at https://repository.fem.tu-ilmenau.de/trac/c3tt/wiki. Btw, the CCC people in #voc at hackint.eu have been very kind and supportive.
I've set up an instance of C3TT for DebConf15 at http://c3tt.acaia.ca/. If you want to play with it just ping me in #debconf-video on OFTC. As you can see, we can keep a single external C3TT server for all Debian events, with very little work left to the local side. Doesn't that sound amazing?
Setting up a new conference
Go to Projects, then Create.
In the project area we'll need to import the tickets. Tickets come from the schedule file, which is an XML file as generated by frab. With a minor hack we've been able to make the schedule XML from DebConf's Summit quite compatible with it (kudos cate!):
Importing tickets
When importing the schedule from https://summit.debconf.org/debconf15.xml we'll be asked which rooms we want to import events from. Usually those with video coverage will be selected:
Choosing rooms
Then we may want to exclude the talks for which we won't provide video:
Choosing talks
We're also required to adjust some Properties for a given conference. An example with some explanation of these properties is available at https://c3voc.de/wiki/c3tracker. For my initial tests the ones below seem to be enough:
Setting properties
The backend: a basic understanding
The screen UI mentioned above runs a set of scripts in the background which automate most of the tasks, from preparing videos for cutting to deploying them to different online services.
Tab 0: A-recording-scheduler
Every 30 seconds it checks whether there's any ticket in the scheduled or recording state. It's based on the start/end datetime of the talk, so the ticket is kept as scheduled (current < start), marked as recording (start <= current <= end), or marked as recorded (current > end).
Tab 1: B-mount4cut
Every 30 seconds it checks whether there's any ticket in the recorded state. That means the talk has already finished and the raw video file is available in the path previously set as a property (Path.Capture) in the web interface.
For each ticket marked as recorded it tries to find the related video file in the capture path. The file name format should be <room>-YYYY-MM-DD_HH-MM-SS.<extension>. The script then uses fuse-vdv to create a custom filesystem with the files needed for human interaction (fancy stuff!).
Here's an example of talks in a room called Heidelberg, after being recorded and auto-mounted by the B-mount4cut script:
Mounting custom fuse FS
The human interaction is just a short review process using Kdenlive. The reviewers access these files via a glusterFS network share. There's even a Debian VirtualBox image provided for that, including all the necessary tools. I'm looking into this right now and will report what I get in the next few hours.
Hopefully the following scripts will also be covered, very soon-ish :)
Tab 2: C-cut-postprocessor
Tab 3: D-encoding
Tab 4: E-postencoding (auphonic)
Tab 5: F-postprocessing(upload)
DebCamp is fun.
c3video for debconf #2
11 August 2015, 19:33
This is a follow-up to my previous post about the DebConf videoteam using a new software stack for the next conferences: http://acaia.ca/~tiago/posts/c3video-for-dc-take-1/.
Installing C3TT scripts
There's a video (in German) which gives an idea about how the C3TT works: https://www.youtube.com/watch?v=K-KHbAcTo9I
It basically gives the volunteers a web interface to cut and review the recordings, which communicates with a set of scripts running in the background to automate some tasks.
"Installing" the set of scripts is just a matter of placing them in a common directory and installing some Perl dependencies, most of which are already packaged for Debian.
First check it out from the svn repository (fun fact: the web interface is written in PHP and lives in a git repository, while the scripts are mostly written in Perl with a little bash, in a Subversion repository. Both the conference and media systems are in Ruby :)
$ svn co https://subversion.fem.tu-ilmenau.de/repository/cccongress
$ mkdir /opt/crs; mv cccongress/trunk/tools /opt/crs/
A few libraries are required:
$ apt-get install libboolean-perl libmath-round-perl libdatetime-perl libwww-curl-perl libconfig-inifiles-perl libxml-simple-perl
$ perl -MCPAN -e 'install XML::RPC::Fast'
In the web tracker, create a project, then go to All projects => Workers and create a worker (I'll try to explain it later). Edit the worker and you'll see the token and secret that the scripts should use to talk to the interface.
cd /opt/crs/tools/tracker3.0
Create a file tracker-profile.sh with the following lines (using your correct values):
export CRS_TRACKER=http://localhost/rpc
export CRS_TOKEN=2q24M7LW4Rk31YNW4tWKv8koNvyM3V4s
export CRS_SECRET=5j8SyCS35W2SBk2XIM4IWeDUqF9agG1x
We also need to build and install the fuse-vdv package from trunk/tools (if working with dv files, otherwise fuse-ts package).
The next step is to run the scripts. Fortunately a nice UI has been put together using screen with multiple tabs, which can be switched between with the <Ctrl+a> <number> combination.
cd /opt/crs/tools/tracker3.0 && ./start-screenrc-dv.sh
We'll get the following:
Screen tabs from C3TT
In a next post I'll try to explain a bit of how the web system works together with the scripts and how to do a basic setup for a real conference. I hope to get there soon!
c3video for debconf #1
11 August 2015, 12:41
Some context
DebConf has provided live streaming and recordings of talks since 2004. We used to work with a set of scripts which worked together with Pentabarf for most of the videoteam tasks, including volunteer shift coordination, the reviewing process, encoding and deployment.
Things have changed since DebConf14, when Pentabarf was replaced by Summit as the conference management system. Without those old Pentabarf features and hacks we had to invent new ways of dealing with the video workflow in DebConf. We gave veyepar a try in 2014, and we will probably do so again at DebConf15. However, as a long-term solution we are considering the software stack from the CCC Video Operation Center, which so far I see as a free, solid and community-oriented mix of Debian-friendly tools.
I will be reporting progress on setting up and testing the CCC software structure for DebConf. Having the opportunity of being at DebCamp together with other videoteam folks will certainly make things easier :)
Setting up the CCC Ticket Tracker
C3TT is a ticket/tracker system used by the CCC for the reviewing/encoding process.
The web side of C3TT is written in PHP and can be cloned from http://git.fem.tu-ilmenau.de/cccongress.git. Some documentation is available at https://repository.fem.tu-ilmenau.de/trac/c3tt/wiki and from https://c3voc.de/wiki/c3tracker.
What I've done so far to get it working:
Installing some dependencies:
$ apt-get install postgresql-9.4 php5-pgsql php5-xsl postgresql-contrib-9.4 php5-xmlrpc php5
Creating the database and user:
$ su -s /bin/bash postgres
$ createuser -DRS dc15
$ createdb -O dc15 c3tt
$ psql
postgres=# ALTER ROLE dc15 WITH PASSWORD 'xxx';
Basic site config using lighttpd:
$HTTP["host"] =~ "c3tt\.your\.host" {
    server.document-root = "/var/www/c3tt/Public/"
    alias.url = ("/javascript/" => "/var/www/js/")
    url.rewrite-once = (
        ".*\.(js|ico|gif|jpg|png|css)$" => "$0",
        "^(.*?)$" => "index.php/$1",
    )
}
Running the installer script:
$ php -q Install/install.php
This will ask you some questions, then create the config file and populate the database. At this point you should be able to access the ticket tracking system from your browser.
The set of scripts from C3TT doesn't need to be installed on the same host as the web side; they communicate via XML-RPC. In a next post I will report on the installation and initial setup of these scripts.
Elixir in Debian, MiniDebconf at FISL, and Debian CI updates
5 August 2015, 1:13
In June I started keeping track of my Debian activities, and this is my July update.
Elixir in Debian
Elixir is a functional language built on top of the Erlang virtual machine. It features immutable data structures, interesting concurrency primitives, and everything else that Erlang does, but with a syntax inspired by Ruby, which makes it much more approachable in my opinion.
Those interested in Elixir for Debian are encouraged to hang around in #debian-elixir on the OFTC IRC servers. There are still a lot of things to figure out, for example how packaging Elixir libraries and applications is going to work.
MiniDebconf at FISL, and beyond
I helped organize a MiniDebconf at this year's FISL, in Porto Alegre on the 10th of July. The whole program was targeted at getting more people to participate in Debian, so there were talks about translation, packaging, and a few other more specific topics.
I myself gave two talks: one about Debian basics, "What is Debian, and how it works", and a second one on "packaging the free software web", which I will also give at Debconf15 later this month.
The recordings are available (all talks in Portuguese) at the Debian video archive thanks to Holger Levsen.
We are also organizing a new MiniDebconf in October as part of the Latinoware schedule.
Ruby
We are in the middle of a transition to Ruby 2.2 as the default in Debian unstable, and we are almost there. The Ruby transition is now on hold while the GCC 5 one is going on, but it will be picked up as soon as we are done with GCC 5.
ruby-defaults has been uploaded to experimental for those who want to try having Ruby 2.2 as the default before that change hits unstable. I myself have been using Ruby 2.2 as the default for several weeks without any problem so far, including using vagrant on a daily basis and doing all my development on sid with it.
I started taking notes about Ruby interpreter transitions work to make sure that knowledge is registered.
I have uploaded minor security updates of both ruby2.1 and ruby2.2 to unstable. They both reached testing earlier today.
I have also fixed another bug in redmine, which I hope to get into stable as well as soon as possible.
gem2deb has seen several improvements through versions 0.19, 0.20, 0.20.1 and 0.20.2.
I have updated a few packages:
- ruby-rubymail
- ruby-ferret
- ruby-omniauth
- ruby-hashie
- ruby-rack-accept
- chef-zero
- nailgun-agent
- ruby-serialport
- ruby-gnome2
- ruby-mysql2
- ruby-dataobjects-postgres
- ruby-standalone
- thin
- ruby-stringex
- ruby-i18n
Two NEW packages, ruby-rack-contrib and ruby-grape-logging, were ACCEPTED into the Debian archive. Kudos to the ftp-master team, who are doing an awesome job reviewing new packages really fast.
Debian Continuous Integration
This month I have made good progress with the changes needed to make debci work as a distributed system with one master/scheduler node and as many worker nodes (running tests) as possible.
While doing my tests, I have submitted a patch to lxc and updated autodep8 in unstable. At some point I plan to upload both autodep8 and autopkgtest to jessie-backports.
Sponsoring
I have sponsored a few packages:
- ruby-rack-mount, ruby-grape-entity, and ruby-grape for Hleb Valoshka.
- redir and tmate twice for Lucas Kanashiro.
- lxc to wheezy-backports for Christian Seiler.
My take on the MinC Digital Agenda
2 August 2015, 3:45
I was invited to take part in the Agenda Digital event of the Ministério da Cultura (MinC). The meeting surprised me, from its organization to its results: everything very simple and direct, yet with very significant achievements. There is no doubt this was one of the best IT events I have ever attended.
Collaboratively, we agreed on which topics were the most important to discuss, and then moved on to what I consider one of the keys to the event's success: the challenges.
The challenges were not hypothetical. They were concrete demands from the government, all focused on free and mature solutions. Some of the challenges were shaped and refactored throughout the event, for others we concluded they were not worth pursuing at that moment, and there were even new challenges that came up in conversations over coffee.
We had talks in the morning, workshops in the afternoon and hands-on work at night, which usually stretched into the early hours, fueled by lots of energy drinks and plenty of adrenaline to finish the challenges.
The challenges we tackled were: migrating an Active Directory domain to Samba4, which was our biggest effort of the event and cost the team several nights of sleep; building a Docker image of the servicos.gov.br portal, which used to take almost an hour to provision and from now on will take only a few minutes; analyzing MinC's monitoring, which resulted in a preliminary report on the structural problems in that data collection; and, last but not least, a centralized logging service with ELK, which provided the kick-off for centralized collection of all the information that today is scattered across the institution.
The results go far beyond the technologies involved: a very cohesive and engaged group showed that change is possible when there is will and people who care. I think this event will go down in history, as it may be the kick-off for a drastic change in how the government chooses and operates its technologies.
I am immensely grateful for the invitation and remain available for further collaboration.
Below are the links to the talks recorded during the event:
Speaking of streaming and recording talks, we found this free software, which was our salvation for that kind of activity.
Below are other opinion posts about this event:
http://gutocarvalho.net/octopress/2015/08/01/minha-visao-da-agenda-digital/