
Planeta DebianBrasil.org


Antonio Terceiro (terceiro): pristine-tar updates

9 October 2017, 15:06, by Planeta Debian Brasil

Introduction

pristine-tar is a tool that is present in the workflow of a lot of Debian people. I adopted it last year after it was orphaned by its creator, Joey Hess. A little after that, Tomasz Buchert joined me, and we are now a functional two-person team.

The goal of pristine-tar is to import the content of a pristine upstream tarball into a VCS repository, and to be able to later reconstruct that exact same tarball, bit by bit, based on the contents in the VCS, so we don’t have to store a full copy of that tarball. This is done by storing a binary delta file which can be used to reconstruct the original tarball from a tarball produced with the contents of the VCS. Ultimately, we want to make sure that the tarball that is uploaded to Debian is exactly the same as the one that was downloaded from upstream, without having to keep a full copy of it around if all of its contents are already extracted in the VCS anyway.
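As a rough sketch of that workflow (the package name and branch below are hypothetical; the exact invocation depends on your packaging setup), importing and later reconstructing a tarball looks like this:

$ pristine-tar commit ../foo_1.0.orig.tar.gz upstream/1.0   # store a delta for the tarball against the upstream/1.0 ref
$ pristine-tar checkout ../foo_1.0.orig.tar.gz              # later, reconstruct the exact same tarball from the VCS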

The current state of the art, and perspectives for the future

pristine-tar solves a wicked problem, because our ability to reconstruct the original tarball is affected by changes in the behavior of tar and of all of the compression tools (gzip, bzip2, xz), and by which exact options were used when creating the original tarballs. Because of this, pristine-tar currently has a few embedded copies of old versions of compressors in order to be able to reconstruct tarballs produced by them, and also relies on an ever-evolving patch to tar that has been carried in Debian for a while.

So basically keeping pristine-tar working is a game of Whac-A-Mole. Joey provided a good summary of the situation when he orphaned pristine-tar.

Going forward, we may need to rely on other ways of ensuring integrity of upstream source code. That could take the form of signed git tags, signed uncompressed tarballs (so that the compression doesn’t matter), or maybe even a different system for storing actual tarballs. Debian bug #871806 contains an interesting discussion on this topic.
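As an illustration of what those alternatives could look like in practice (the tag and file names here are hypothetical), verification would boil down to something like:

$ git tag -v upstream/1.0                               # verify a signed git tag
$ gpg --verify foo_1.0.orig.tar.asc foo_1.0.orig.tar    # verify a detached signature on an uncompressed tarball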

Recent improvements

Even though keeping pristine-tar useful in the long term will be hard, too much Debian work currently relies on it, so we can’t just abandon it. Instead, we keep figuring out ways to improve it. And I have good news: pristine-tar has recently received updates that improve the situation quite a bit.

In order to understand how much better we are getting at this, I created a visualization of the regression test suite results. With the help of data from there, let’s look at the improvements made since pristine-tar 1.38, which was the version included in stretch.

pristine-tar 1.39: xdelta3 by default

This was the first release made after the stretch release, and it made xdelta3 the default delta generator for newly-imported tarballs. Existing tarballs with deltas produced by xdelta are still supported; this only affects new imports.

The support for having multiple delta generators was written by Tomasz and had already been there since 1.35, but we decided to only flip the switch after xdelta3 support was available in a stable release.

pristine-tar 1.40: improved compression heuristics

pristine-tar uses a few heuristics to produce the smallest delta possible, and this includes trying different compression options. In this release, Tomasz included a contribution by Lennart Sorensen to also try the --gnu option, which greatly improved the support for rsyncable gzip-compressed files. We can see an example of the type of improvement we got in the regression test suite data for delta sizes for faad2_2.6.1.orig.tar.gz:

In 1.40, the delta produced from the test tarball faad2_2.6.1.orig.tar.gz went down from 800KB, almost the same size as the tarball itself, to 6.8KB.

pristine-tar 1.41: support for signatures

This release saw the addition of support for storage and retrieval of upstream signatures, contributed by Chris Lamb.

pristine-tar 1.42: optionally recompressing tarballs

I had this idea and wanted to try it out: most of our problems reproducing tarballs come from tarballs produced with old compressors, from changes in compressor behavior, or from uncommon compression options being used. What if we could just recompress the tarballs before importing them? Yes, this kind of breaks the “pristine” bit of the whole business, but on the other hand: 1) the contents of the tarball are not affected, and 2) even if the initial tarball is not bit by bit the same as the one upstream released, at least future uploads of that same upstream version with Debian revisions can be regenerated just fine.

In some cases, as with the test tarball util-linux_2.30.1.orig.tar.xz, recompressing is what makes reproducing the tarball (and thus importing it with pristine-tar) possible at all:

util-linux_2.30.1.orig.tar.xz can only be imported after being recompressed

In other cases, if the current heuristics can’t produce a reasonably small delta, recompressing makes a huge difference. It’s the case for mumble_1.1.8.orig.tar.gz:

with recompression, the delta produced from mumble_1.1.8.orig.tar.gz goes from 1.2MB, or 99% of the size of the original tarball, to 14.6KB, 1% of the size of the original tarball

Recompressing is not enabled by default, and can be enabled by passing the --recompress option. If you are using pristine-tar via a wrapper tool like gbp-buildpackage, you can use the $PRISTINE_TAR environment variable to set options that will affect any pristine-tar invocation.
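For example, a minimal sketch of enabling recompression for every pristine-tar call made through gbp would look like this:

$ export PRISTINE_TAR="--recompress"
$ gbp buildpackage --git-pristine-tar    # pristine-tar invocations made by gbp will now use --recompress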

Also, even if you enable recompression, pristine-tar will only try it if the delta generation fails completely, or if the delta produced from the original tarball is too large. You can control what “too large” means by using the --recompress-threshold-bytes and --recompress-threshold-percent options. See the pristine-tar(1) manual page for details.
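A direct invocation using those options could look like this (the threshold value and the tarball name are just illustrative):

$ pristine-tar --recompress --recompress-threshold-percent 10 commit ../foo_1.0.orig.tar.gz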



Antonio Terceiro (terceiro): Debconf17

14 August 2017, 17:27, by Planeta Debian Brasil

I’m back from Debconf17.

I gave a talk entitled “Patterns for Testing Debian Packages”, in which I presented a collection of 7 patterns I documented while pushing the Debian Continuous Integration project forward, and which were published in a 2016 paper. Video recording and a copy of the slides are available.

I also hosted the ci/autopkgtest BoF session, in which we discussed issues around the usage of autopkgtest within Debian, the CI system, etc. Video recording is available.

Kudos to the DebConf video team for making the recordings available so quickly!



João Eriberto Mota Filho: How to migrate from Debian Jessie to Stretch

18 June 2017, 17:58, by Planeta Debian Brasil

Welcome to Debian Stretch!

Yesterday, June 17th, 2017, Debian 9 (Stretch) was released. I would like to talk about some basic procedures and rules for migrating from Debian 8 (Jessie).

Initial steps

  • The first thing to do is to read the release notes. This is essential to know about possible bugs and special situations.
  • The second step is to fully update Jessie before migrating to Stretch. To do that, still inside Debian 8, run the following commands:
# apt-get update
# apt-get dist-upgrade

Migrating

  • Edit the /etc/apt/sources.list file and change every jessie name to stretch. Below is an example of the contents of that file (it may vary according to your needs):
deb http://ftp.br.debian.org/debian/ stretch main
deb-src http://ftp.br.debian.org/debian/ stretch main

deb http://security.debian.org/ stretch/updates main
deb-src http://security.debian.org/ stretch/updates main
  • Then, run:
# apt-get update
# apt-get dist-upgrade

If there is any problem, read the error messages and try to solve the issue. Whether you manage to solve it or not, run the command again:

# apt-get dist-upgrade

If new problems show up, try to solve them. Search for solutions on Google if necessary. But, in general, everything will work fine and you should not have problems.

Changes in configuration files

While you are migrating, some messages about changes in configuration files may be shown. This can leave some users lost, not knowing what to do. Don't panic.

These messages can be presented in two ways: as plain text in the shell or in a blue message window. The text below is an example of a message in the shell:

Configuration file '/etc/rsyslog.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
 What would you like to do about it? Your options are:
 Y or I : install the package maintainer's version
 N or O : keep your currently-installed version
 D : show the differences between the versions
 Z : start a shell to examine the situation
 The default action is to keep your current version.
*** rsyslog.conf (Y/I/N/O/D/Z) [default=N] ?

The screen below is an example of a message shown in a window:

In both cases, it is recommended that you choose to install the new version of the configuration file. That is because the new configuration file will be fully adapted to the newly installed services and may have many new or different options. But don't worry: your settings will not be lost, as there will be a backup of them. So, in the shell, choose the "Y" option and, in the window case, choose the "install the package maintainer's version" option. It is very important to write down the name of each modified file. In the case of the window above, it is the /etc/samba/smb.conf file. In the shell case, the file was /etc/rsyslog.conf.

After completing the migration, you will be able to see both the new configuration file and the original one. If the new file was installed after a choice made in the shell, the original file (the one you had before) will keep the same name with the .dpkg-old extension. If the choice was made in a window, the file will be kept with the .ucf-old extension. In both cases, you can review the changes that were made and reconfigure your new file according to your needs.
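If you want to locate all of those backup files at once after the upgrade, a simple search like the following should work:

# find /etc -name '*.dpkg-old' -o -name '*.ucf-old'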

If you need help to see the differences between the files, you can use the diff command to compare them. Always diff from the new file to the original one. It is as if you wanted to see what to do to the new file to make it the same as the original. Example:

# diff -Naur /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

At first sight, the lines marked with "+" would need to be added to the new file for it to look like the old one, and the lines marked with "-" would need to be removed. But be careful: it is normal for some lines to be different, since the configuration file was made for a new version of the service or application it belongs to. So, change only the lines that are really necessary and that you had changed in the previous file. See the example:

+daemon.*;mail.*;\
+ news.err;\
+ *.=debug;*.=info;\
+ *.=notice;*.=warn |/dev/xconsole
+*.* @sam

In my case, originally, I had only changed the last line. So, in the new configuration file, I am only interested in adding that line. Well, if you were the one who did the previous configuration, you will know the right thing to do. Usually, there will not be many differences between the files.

Another option to see the differences between files is the mcdiff command, which is provided by the mc package. Example:

# mcdiff /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

Problems with graphical environments and applications

You may have problems with graphical environments, such as GNOME, KDE, etc., or with applications such as Mozilla Firefox. In those cases, the problem is probably in the configuration files of those components, located in the user's home directory. To check, create a new user on the system and test with it. If everything works, back up the old configuration files (or rename them) and let the application create a new configuration. For example, for Mozilla Firefox, go to the user's home directory and, with Firefox closed, rename the .mozilla directory to .mozilla.bak, then start Firefox and test.
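For the Firefox example above, that boils down to something like:

$ cd ~
$ mv .mozilla .mozilla.bak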

Feeling unsure?

If you feel very unsure, install Debian 8, with a graphical environment and other things, in a virtual machine and migrate it to Debian 9 to test and learn. I suggest VirtualBox as the virtualizer.

Have fun!

 



João Eriberto Mota Filho: Debian Developers living in South America

12 June 2017, 2:11, by Planeta Debian Brasil

Well, I made this map using data from http://db.debian.org. As an example, there are currently 27 Brazilian DDs; however, there are 23 DDs living in Brazil.

 



João Eriberto Mota Filho: OpenVAS 9 from Kali Linux 2017.1 to Debian 9

8 June 2017, 1:08, by Planeta Debian Brasil

The OpenVAS

OpenVAS is a framework of several services and tools offering a comprehensive and powerful vulnerability scanning and vulnerability management solution. The framework is part of Greenbone Networks' commercial vulnerability management solution, from which developments have been contributed to the Open Source community since 2009.

OpenVAS is composed of several components, such as OpenVAS-Cli, Greenbone Security Assistant, OpenVAS Scanner and OpenVAS Manager.

The official OpenVAS homepage is http://www.openvas.org.

From Kali Linux 2017.1 to Debian 9

Ok, this is a temporary solution. Right now (June 2017), Debian 9 has not been released yet and OpenVAS 9 is not available in Debian in good shape (it is in Experimental, but a bit problematic). I think that we will have OpenVAS in backports soon.

The OpenVAS 9 from Kali works perfectly on Debian 9. So, to take advantage of this, follow these steps:

1. Add the following line to the end of the /etc/apt/sources.list file:

deb http://http.kali.org/kali kali-rolling main

2. Run:

# apt-get update
# apt-get install -t kali-rolling openvas

(if you want to simulate the installation first, add the -s option before -t)
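For example, the simulated run would be:

# apt-get install -s -t kali-rolling openvas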

3. Remove or comment out the line previously added to the /etc/apt/sources.list file.

4. Run the following command to configure OpenVAS and to download the initial database:

# openvas-setup

This step may take some time. Note that an initial password for the admin user will be created and shown.

5. Finally, open a web browser and access the address https://127.0.0.1:9392 (use https!!!).

Some tips

To create a new administrative user called test:

# openvasmd --create-user test --role Admin

To update the database (NVTs):

# openvasmd --update
# openvasmd --rebuild
# service openvas-scanner restart

To solve the message "Login failed. Waiting for OMP service to become available":

# openvas-start

Enjoy!



Antonio Terceiro (terceiro): Papo Livre #1 - communication channels

6 June 2017, 12:46, by Planeta Debian Brasil

We have just released another episode of Papo Livre: #1 – meios de comunicação (communication channels).

In this episode, Paulo Santana, Thiago Mendonça and I discuss the various communication channels used in free software communities. The discussion starts with the “older” ones, such as IRC and mailing lists, moves on to the more “modern” ones, passes through the half-free, half-proprietary Telegram, and arrives at the newest promise in this area, Matrix (and its most famous/viable client, Riot).



Antonio Terceiro (terceiro): Debian CI: new data retention policy

28 May 2017, 21:20, by Planeta Debian Brasil

When I started debci for Debian CI, I went for the simplest thing that could possibly work. One of the design decisions was to use the filesystem directly for file storage. A large part of the Debian CI data is log files and test artifacts (which are just files), and using the filesystem directly for storage makes them a lot easier to handle. The rest of the data, which is structured (test history and status of packages), is stored as JSON files.

Another nice benefit of using the filesystem like this is that I get a sort of REST API for free by just exposing the file storage to the web. For example, getting the latest test status of debci itself on unstable/amd64 is as easy as:


$ curl https://ci.debian.net/data/packages/unstable/amd64/d/debci/latest.json
{
  "run_id": "20170528_173652",
  "package": "debci",
  "version": "1.5.1",
  "date": "2017-05-28 17:43:05",
  "status": "pass",
  "blame": [],
  "previous_status": "pass",
  "duration_seconds": "373",
  "duration_human": "0h 6m 13s",
  "message": "Tests passed, but at least one test skipped",
  "last_pass_version": "1.5.1",
  "last_pass_date": "2017-05-28 17:43:05"
}

Now, nothing in life is without compromises. One big disadvantage of the way debci stored its data is that there were a lot of files, which ended up using a large number of inodes in the filesystem. The current Debian CI master has more than 10 million inodes in its filesystem, and almost all of them were being used. This is clearly unsustainable.
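For the curious, inode usage on a filesystem can be checked with df:

$ df -i /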

You will notice that I said stored, because as of version 1.6, debci now implements a data retention policy: log files and test artifacts will now only be kept for a configurable number of days (default: 180).

So there you have it: effective immediately, Debian CI will not provide logs and test artifacts older than 180 days.

If you are reporting bugs based on logs from Debian CI, please don’t hotlink the log files. Instead, make sure you download the logs in question and attach them to the bug report, because in 6 months they will be gone.
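For example (just a sketch; replace the placeholder with the actual log link taken from the Debian CI web interface):

$ wget -O debci-test.log "$LOG_URL"    # $LOG_URL: the log link copied from ci.debian.net

Then attach the downloaded file to the bug report.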



Antonio Terceiro (terceiro): Papo Livre Podcast, episode #0

23 May 2017, 13:00, by Planeta Debian Brasil

Podcasts have been one of my favorite pastimes for a while now. I think it is a very interesting format, for two reasons.

First, there are many podcasts with very high-quality content. My feed currently contains the following (in order of subscription):

It looks like a lot, and it is. Lately I noticed I was listening to episodes several weeks late, so I decided to prioritize episodes whose topics interest me a lot and/or that relate to current events. I also gave up on trying to listen to everything, and accepted that I will delete some items without listening to them.

Second, listening to a podcast does not require you to stop and give it your full attention. For example, because of a knee injury that led me to have ligament reconstruction surgery, I am condemned to do strength training for the rest of my life, which is a drag. After I started listening to podcasts, I actually want to go to the gym, because that is now my main podcast-listening time. Also, every time I need to go somewhere by myself, or do some boring but necessary chore like washing the dishes, I have company.

I had wanted to make a podcast for a while, and yesterday that project officially became a reality. Paulo Santana, Thiago Mendonça and I are launching the Papo Livre podcast, where we will discuss free software in all of its aspects.

In the first episode, starting from the news of Richard Stallman's upcoming visit to Brazil in the next few weeks, we discuss the origins and some fundamental concepts of free software.



Antonio Terceiro (terceiro): Patterns for Testing Debian Packages

17 March 2017, 1:23, by Planeta Debian Brasil

At the end of 2016 I had the pleasure to attend the 11th Latin American Conference on Pattern Languages of Programs, a.k.a. SugarLoaf PLoP. PLoP is a series of conferences on Patterns (as in “Design Patterns”), a subject that I appreciate a lot. Each of the PLoP conferences, except for the original main “big” conference, has a funny name. SugarLoaf PLoP is called that way because its very first edition was held in Rio de Janeiro, so the organizers named it after a very famous mountain in Rio. The name stuck even though a long time has passed since it was last held in Rio. 2016 was actually the first time SugarLoaf PLoP was held outside of Brazil, finally justifying the “Latin American” part of its name.

I was presenting a paper I wrote on patterns for testing Debian packages. The Debian project funded my travel expenses through the generous donations of its supporters. PLoPs are very fun conferences with a relaxed atmosphere, and it is amazing how many smart (and interesting!) people gather together for them.

My paper is titled “Patterns for Writing As-Installed Tests for Debian Packages”, and has the following abstract:

Large software ecosystems, such as GNU/Linux distributions, demand a large amount of effort to make sure all of their components work correctly individually, and also integrate correctly with each other to form a coherent system. Automated Quality Assurance techniques can prevent issues from reaching end users. This paper presents a pattern language originated in the Debian project for automated software testing in production-like environments. Such environments are closer in similarity to the environment where software will be actually deployed and used, as opposed to the development environment under which developers and regular Continuous Integration mechanisms usually test software products. The pattern language covers the handling of issues arising from the difference between development and production-like environments, as well as solutions for writing new, exclusive tests for as-installed functional tests. Even though the patterns are documented here in the context of the Debian project, they can also be generalized to other contexts.

In practical terms, the paper documents a set of patterns I have noticed over the last few years, while pushing the Debian Continuous Integration project forward. It should be an interesting read for people interested in testing Debian packages in their installed form, as done with autopkgtest. It should also be useful for people from other distributions interested in the subject, as the issues are not really Debian-specific.
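As a minimal illustration of the mechanism (not taken from the paper; the package and test names are hypothetical), an as-installed test is declared in debian/tests/control and exercises the package as installed on the system:

$ cat debian/tests/control
Tests: smoke
Depends: @

$ cat debian/tests/smoke
#!/bin/sh
set -e
foo --version   # run the installed binary, not the one from the build tree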

I have recently finished the final version of the paper, which should be published in the ACM Digital Library at any point now. You can download a copy of the paper in PDF. Source is also available, if you are into markdown, LaTeX, makefiles and this sort of thing.

If everything goes according to plan, I should be presenting a talk on this at the next Debconf in Montreal.



Antonio Terceiro (terceiro): testing build reproducibility with debrepro

3 March 2017, 16:58, by Planeta Debian Brasil

Earlier today I was handling a reproducibility bug and decided I had to try a reproducibility test by myself. I tried reprotest, but I was being hit by a disorderfs issue and I was not sure whether the problem was with reprotest or not (at this point I cannot reproduce that anymore).

So I decided to hack up a simple script to do that, and it works. I even included it in devscripts after writing a manpage. Of course, reprotest is more complete and extensible, and supports arbitrary virtualization backends for doing the more dangerous/destructive variations (such as changing the hostname and other things that require root), but for quick tests debrepro does the job.

Usage examples:


$ debrepro                                 # builds current directory
$ debrepro /path/to/sourcepackage          # builds package there
$ gbp-buildpackage --git-builder=debrepro  # can be used with vcs wrappers as well

debrepro will do two builds with a few variations between them, including $USER, $PATH, timezone, locale, umask, and current time, and will even build under disorderfs if available. Build path variation is also performed, because by definition the builds are done in different directories. If diffoscope is installed, it will be used for a deep comparison of non-matching binaries.
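To give an idea of what that means, here is a simplified sketch of the kind of variation applied between the two builds (this is not the actual debrepro code, and the values and paths are arbitrary):

$ (export TZ=UTC LC_ALL=C; umask 022; dpkg-buildpackage -us -uc)                     # first build
$ (export TZ=Asia/Tokyo LC_ALL=pt_BR.UTF-8; umask 002; dpkg-buildpackage -us -uc)    # second build, varied environment
$ diffoscope first/foo_1.0_amd64.deb second/foo_1.0_amd64.deb                        # deep-compare non-matching results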

If you are interested and don’t want to build devscripts from source or wait for the next release, you can just grab the script, save it as “debrepro” somewhere on your $PATH and make it executable.


