Alberto Rodriguez (A.K.A bt0): Fedora Women’s Day Images

Powered by WPeMatico

Fedora Community Blog: FPgM report: 2018–38

Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week. The Fedora 29 Beta RC5 was declared Go and will release on 25 September.

I’ve set up weekly office hours in #fedora-meeting. Drop by if you have any questions or comments about the schedule, Changes, elections or anything else.

Help requests

Announcements

Upcoming meetings

Schedule

Fedora 29

  • Beta freeze is underway through 18 September.
  • Beta release date is set for 25 September.

Fedora 29 Status

Changes

The numbers below reflect the current state of Bugzilla tracking bugs.

  • NEW/ASSIGNED: 2
  • MODIFIED: 0
  • ON_QA: 39
  • Total: 41

Fedora 30 Changes

Fedora 30 includes a Change that will cause ambiguous Python shebangs (such as #!/usr/bin/python, which does not say whether it means Python 2 or Python 3) to become a build error. A list of failing builds is available on Taskotron.
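To make "ambiguous" concrete, here is a small illustrative check (the function name is ours, not part of any Fedora tooling): a shebang naming plain python, directly or via env, is ambiguous, while python2/python3 is explicit.

```python
# Illustrative only: classify shebang lines the way the Fedora 30 change
# treats them -- an unversioned "python" interpreter is ambiguous, while
# "python2" or "python3" is explicit.
import re

def shebang_is_ambiguous(first_line):
    """Return True when the interpreter is plain 'python' with no version."""
    m = re.match(r"^#!\s*(\S+)(?:\s+(\S+))?", first_line)
    if not m:
        return False  # not a shebang line at all
    interp, arg = m.group(1), m.group(2)
    if interp.endswith("/env") and arg:
        interp = arg  # '#!/usr/bin/env python' -> inspect the argument
    return interp.rsplit("/", 1)[-1] == "python"

print(shebang_is_ambiguous("#!/usr/bin/python"))      # True  (ambiguous)
print(shebang_is_ambiguous("#!/usr/bin/python3"))     # False (explicit)
print(shebang_is_ambiguous("#!/usr/bin/env python"))  # True  (ambiguous)
```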

The post FPgM report: 2018–38 appeared first on Fedora Community Blog.

Harish Pillay 9v1hp: A healthcare IT foundation built on gooey clay

Today, there was a report from the Solicitor General of Singapore about the data breach of the SingHealth systems that happened in July.

These systems have been in place for many years. They run almost exclusively on Microsoft Windows, along with a mix of other proprietary software including Citrix and Allscripts. The article referred to above failed to highlight that the compromised “end-user workstation” was a Windows machine. That crucial piece of information is always left out of these reports of breaches.

I have had the privilege of being part of an IT advisory committee for a local hospital since about 2004 (that committee was disbanded a couple of years ago, by the way).

Every year, the advisory committee would receive budgetary proposals for updates, new versions and so on of this software, for consideration and possible approval. Almost always, I would be the exception on the committee in questioning the continued use of expensive proprietary software for these healthcare systems (a contributory factor to increasing healthcare costs). But because I was the lone contrarian voice, inevitably the vote would be made to approve and hence continue what is, in my opinion, a wasteful path of spending enormous amounts of money on these proprietary systems.

I did try, many times, to propose using open source solutions such as, for example, virtualization with KVM. KVM is already built into the Linux kernel, and you can get full commercial support for it from Red Hat (disclosure: I work for Red Hat). You pay a subscription, we make sure that the systems are running securely (via SELinux for a start), and the enterprise can focus on its core business. But no, they continued with VMware.

I did propose open source solutions like OpenEMR and many other very viable options for the National Electronic Medical Records system – but none of them were accepted. (It has been brought to my attention that there are plans to mandate that private sector healthcare providers use the NEHR. There is considerable opposition to this, both because of the hassle (from their point of view) and the added costs, since the solution is proprietary and expensive.)

There were some glimpses of hope in the early years of being on the committee, but it was quickly snuffed out because the “powers that be” did not think open source solutions would be “good enough”. And open source solutions are still not accepted as part of the healthcare IT architecture.

Why would that be the case?

Part of the reason is that decision makers (then and now) only have experience in dealing with proprietary vendor solutions. Some of those might be the only ones available, where the open source world has not created equivalent or better offerings. But even where there are good enough or even superior open source offerings, they would never be considered – “Rather go with the devil I know than the devil I don’t know. After all, this is only a job. When I leave, it is someone else’s problem.” (Yes, I am paraphrasing many conversations, and not only from the healthcare sector.)

I recall a project that I was involved with – before becoming a Red Hatter – to create a “computer on wheels” solution to help with blood collection. As part of that solution, there was a need to check the particulars of the patient the nurse was taking samples from. That patient info was stored on some admission system that did not provide a means for remote, API-based queries. The vendor of that system wanted tens of thousands of dollars just to allow the query to happen. Daylight robbery. I worked around it – I did screen scraping to extract the relevant information.

Healthcare IT providers look at healthcare systems as a cash cow and want to milk it to the fullest extent possible (the end consumer bears the cost in the end).

Add to that the dearth of technical IT skills supporting the healthcare providers, and you quickly fall into a vendor lock-in scenario where the healthcare systems are at the total mercy of the proprietary vendors.

Singapore is not unique at all. This is a global problem.

Singapore, however, has the potential to break out of this dismal state, if only there were technical, management and political leadership in the healthcare system – the type of leadership that would actively pursue, by all means possible, healthcare IT that is low cost yet supportable and reliable and, more importantly, that can create a domestic ecosystem to support it (not via Government-linked companies).

I did propose many times to create skunkworks projects and/or run hackathons to create solutions using open source tools to seed the next generation of local solutions providers. As I write this, it has not happened.

To compound the lack of thought leadership, the push in the 2000s to “outsource IT” meant that what remaining technically skilled people there were got shortchanged as the work went to these contract providers (some of these skilled people were transferred to the outsourcee firms but left shortly after, because it was just BS).

This also meant that, over time, the various entities that outsourced IT became mere relationship managers with the outsourcee companies. It is not in the interest of the outsourcee companies to propose solutions that could lower the overall cost, as that could affect the outsourcee’s revenue model. So you have a catch-22: no in-house IT/architecture skills, and no interest at all on the part of the outsourcees in proposing lower-cost and perhaps better solutions.

I would be happy, if asked, to put together a set of solutions that steadily addresses all of the healthcare IT requirements. I want this to then trigger the creation of a local ecosystem of companies that can drive these solutions, not only for Singapore’s own consumption but also for export globally.

We have the smarts to do this. The technical community of open source developers is, I am very confident, able to rise to the challenge. We need political thought leadership to make this so.

Give me the new hospital in Woodlands to make the solutions work. I want to be able to do as much of it using commercially supported open source products (see this for a discussion of open source projects vs open source products), and build a whole suite of supportable open source solutions that are open to the whole world to innovate upon. It would be wonderful to see https://health.gov.sg/opensource/ (no it does not exist yet).

There are plenty of ancient, leaky, and crufty systems in the current healthcare IT systems locally. We need to make a clean break from it for the future.

The Smart Nation begs for it. 

Dr Vivian Balakrishnan said the following at GeekCamp SG 2015 (video):

I believe in a #SmartNation with an open source society and immense opportunities; where everyone can share, participate and co-create solutions to enrich and improve quality of life for ourselves and our fellow Singaporeans.

And for completeness, the actual post is here (it is a public page; i.e., no account needed):

I am ready.  Please join me.

Solanch69: Software Freedom Day – SFD

This is my summary of the Software Freedom Day event held in Ica in 2018.

Announcements

Solanch69: CONECIT 2018

The Congreso Nacional de Estudiantes de Computación, Innovación y Tecnologías (CONECIT) is an annual event held in Peru. Attendees have the opportunity to enjoy simultaneous workshops and talks in different auditoriums, on a range of topics. This year it took place in Iquitos, in the department of Loreto. On this occasion, the representatives of Fedora Perú presented… Continue reading CONECIT 2018

Daniel Pocock: Resigning as the FSFE Fellowship's representative

I’ve recently sent the following email to fellows; I’m posting it here for the benefit of the wider community and also for any fellows who don’t receive the email.


Dear fellows,

Given the decline of the Fellowship and FSFE’s migration of fellows into a supporter program, I no longer feel that there is any further benefit that a representative can offer to fellows.

With recent blogs, I’ve made a final effort to fulfill my obligations to keep you informed. I hope fellows have a better understanding of who we are and can engage directly with FSFE without a representative. Fellows who want to remain engaged with FSFE are encouraged to work through your local groups and coordinators as active participation is the best way to keep an organization on track.

This resignation is not a response to any other recent events. From a logical perspective, if the Fellowship is going to evolve out of a situation like this, it is in the hands of local leaders and fellowship groups; it is no longer a task for a single representative.

There are many positive experiences I’ve had working with people in the FSFE community and I am also very grateful to FSFE for those instances where I have been supported in activities for free software.

Going forward, leaving this role will also free up time and resources for other free software projects that I am engaged in.

I’d like to thank all those of you who trusted me to represent you and supported me in this role during such a challenging time for the Fellowship.

Sincerely,

Daniel Pocock

Fedora Community Blog: Submit a Fedora talk to DevConf.cz 2019

DevConf banner showing buildings in Brno with fireworks in the background

DevConf.cz 2019 is the free, Red Hat sponsored community conference for open source contributors. Developers, admins, DevOps engineers, testers, documentation writers and other contributors to Linux, middleware, virtualization, storage, cloud, and mobile technologies will meet in Brno, Czechia January 25-27, 2019. Join Fedora and other FLOSS communities to sync, share, and hack on upstream projects together.

Ready to submit your proposal? The CfP is now open!

Looking for ideas? Check out this year’s primary themes on the DevConf.cz website. Some themes include IoT, kernel, cloud & containers, and — of course — Fedora. You can also look at last year’s schedule for inspiration. We’d like to have as many high-quality and useful talks about Fedora or about other technologies on the Fedora platform as possible.

Important DevConf.cz dates

  • CfP closes: October 26, 2018
  • Accepted speakers confirmation: November 12, 2018
  • Event dates: Friday January 25 to Sunday January 27, 2019

The post Submit a Fedora talk to DevConf.cz 2019 appeared first on Fedora Community Blog.

Richard Hughes: Speeding up AppStream: mmap’ing XML using libxmlb

AppStream and the related AppData are XML formats that have been adopted by thousands of upstream projects and are being used in about a dozen different client programs. The AppStream metadata shipped in Fedora is currently a huge 13Mb XML file, which with gzip compresses down to a more reasonable 3.6Mb. AppStream is awesome; it provides translations of lots of useful data into basically all languages and includes screenshots for almost everything. GNOME Software is built around AppStream, and we even use a slightly extended version of the same XML format to ship firmware update metadata from the LVFS to fwupd.

XML does have two giant weaknesses. The first is that you have to decompress and then parse the files – which might include all the ~300 tiny AppData files as well as the distro-provided AppStream files, if you want to list installed applications not provided by the distro. Seeking lots of small files isn’t so slow on an SSD, and loading+decompressing a small file is actually quicker than loading an uncompressed larger file. Parsing an XML file typically means you set up some callbacks, which then get called for every start tag, text section, then end tag – so for a 13Mb XML document that’s nested very deeply you have to do a lot of callbacks. This means you have to process the description of GIMP in every language before you can even see if Shotwell exists at all.
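The per-node callback cost is easy to see with any SAX-style parser. Here is a tiny sketch using Python’s expat bindings (standing in for the C parsers this post is really about):

```python
# Every start tag, text run and end tag fires a callback, so a deeply
# nested 13Mb document means an enormous number of calls before you have
# learned anything useful.
import xml.parsers.expat

counts = {"start": 0, "text": 0, "end": 0}

def on_start(name, attrs):
    counts["start"] += 1

def on_text(data):
    counts["text"] += 1

def on_end(name):
    counts["end"] += 1

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = on_start
parser.CharacterDataHandler = on_text
parser.EndElementHandler = on_end

parser.Parse(b"<components><component><id>gimp.desktop</id></component></components>", True)
print(counts)  # three elements -> at least six callbacks for one tiny document
```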

The typical way of parsing XML involves creating a “node tree”. This lets you treat the XML document as a Document Object Model (DOM), which allows you to navigate the tree and parse the contents in an object-oriented way. It also means you typically allocate the nodes themselves on the heap, plus copies of all the string data. AsNode in libappstream-glib has a few tricks to reduce RSS usage after parsing, which include:

  • Interning common element names like description, p, ul, li
  • Freeing all the nodes, but retaining all the node data
  • Ignoring node data for languages you don’t understand
  • Reference counting the strings from the nodes into the various appstream-glib GObjects

This approach still has drawbacks: we need to store in hot memory all the screenshot URLs of all the apps you’re never going to search for, and we also need to parse all the long translated description data just to find out if gimp.desktop is actually installable. Deduplicating strings at runtime takes nontrivial amounts of CPU and means we build a huge hash table that uses nearly as much RSS as we save by deduplicating.
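The runtime deduplication trick boils down to an intern table. A minimal sketch (this is the concept, not the actual appstream-glib code) shows both the saving and the cost:

```python
# Minimal string-interning sketch: duplicates collapse to one shared object,
# but every unique string is also held as a key in the table itself -- which
# is why the hash table costs nearly as much RSS as the duplication it removes.
intern_table = {}

def intern(s):
    return intern_table.setdefault(s, s)

# Two distinct copies of the same text, as a parser would produce:
a = "gimp.desktop".encode().decode()
b = "gimp.desktop".encode().decode()
assert a is not b              # separate allocations before interning

assert intern(a) is intern(b)  # one shared object afterwards
assert len(intern_table) == 1  # ...but the table keeps its own reference
```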

On a modern system, parsing ~300 files takes less than a second, and the total RSS is only a few tens of Mb – which is fine, right? Except on resource constrained machines it takes 20+ seconds to start, and 40Mb is nearly 10% of the total memory available on the system. We have exactly the same problem with fwupd, where we get one giant file from the LVFS, all of which gets stored in RSS even though you never have the hardware that it matches against. Slow starting of fwupd and gnome-software is one of the reasons they stay resident, and don’t shut down on idle and restart when required.

We can do better.

We do need to keep the source format, but that doesn’t mean we can’t create a managed cache to do some clever things. Traditionally I’ve been quite vocal against squashing structured XML data into databases like sqlite and Xapian as it’s like pushing a square peg into a round hole, and forces you to think like a database doing 10 level nested joins to query some simple thing. What we want to use is something like XPath, where you can query data using the XML structure itself.

We also want to be able to navigate the XML document as if it were a DOM, i.e. be able to jump from one node to its sibling without parsing all the great-great-great-grandchild nodes to get there. This means storing the offset to the sibling in a binary file.

If we’re creating a cache, we might as well do the string deduplication at creation time once, rather than every time we load the data. This has the added benefit in that we’re converting the string data from variable length strings that you compare using strcmp() to quarks that you can compare just by checking two integers. This is much faster, as any SAT solver will tell you.

If we’re storing a string table, we can also store the NUL byte. This seems wasteful at first, but has one huge advantage – you can mmap() the string table. In fact, you can mmap the entire cache. If you order the string table in a sensible way then you store all the related data in one block (e.g. the values) so that you don’t jump all over the cache invalidating almost everything just for a common query. mmap’ing the strings means you can avoid strdup()ing every string just in case; in the case of memory pressure the kernel automatically reclaims the memory, and the next time automatically loads it from disk as required. It’s almost magic.
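The string-table idea can be sketched in a few lines (this is the concept, not libxmlb’s actual on-disk format): serialize each string once with its NUL terminator, then mmap the file and address strings by byte offset, with no per-string copy on load.

```python
# Write NUL-terminated strings once, then mmap the file and read any string
# back by offset -- no strdup() per string, and the kernel is free to drop
# and reload the pages under memory pressure.
import mmap
import os
import tempfile

strings = ["description", "p", "ul", "li"]

# Build side: record each string's offset as we append it with its NUL byte.
offsets = {}
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    pos = 0
    for s in strings:
        offsets[s] = pos
        data = s.encode() + b"\x00"
        f.write(data)
        pos += len(data)

# Load side: mmap the table and slice a string out by offset, up to its NUL.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        off = offsets["ul"]
        result = mm[off:mm.find(b"\x00", off)].decode()

os.unlink(path)
print(result)   # ul
print(offsets)  # {'description': 0, 'p': 12, 'ul': 14, 'li': 17}
```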

I’ve spent the last few days prototyping a library, which is called libxmlb until someone comes up with a better name. I’ve got a test branch of fwupd that I’ve ported from libappstream-glib and I’m happy to say that RSS has reduced from 3Mb (peak 3.61Mb) to 1Mb (peak 1.07Mb) and the startup time has gone from 280ms to 250ms. Unless I’ve missed something drastic I’m going to port gnome-software too, and will expect even bigger savings as the amount of XML is two orders of magnitude larger.

So, how do I use this thing? First, let’s create a baseline by doing things the old way:

$ time appstream-util search gimp.desktop
real	0m0.645s
user	0m0.800s
sys	0m0.184s

To create a binary cache:

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.639s
user	0m0.565s
sys	0m0.057s

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.016s
user	0m0.004s
sys	0m0.006s

Notice the second time it compiled nearly instantly, as none of the filename or modification timestamps of the sources changed. This is exactly what programs would do every time they are launched.
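That rebuild-only-when-stale behaviour amounts to an mtime comparison. A simplified sketch of the idea (not xb-tool’s actual logic, which also tracks the set of filenames):

```python
# A cache is fresh when it exists and none of its source files has been
# modified after it was written. (A real implementation would also store a
# manifest so that added or removed source files invalidate the cache.)
import os

def cache_is_fresh(cache_path, source_paths):
    if not os.path.exists(cache_path):
        return False
    cache_mtime = os.path.getmtime(cache_path)
    return all(os.path.getmtime(src) <= cache_mtime for src in source_paths)
```

If the check passes, the program can map the existing blob immediately instead of re-parsing every XML source.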

$ du -h appstream.xmlb
13M	appstream.xmlb

$ time xb-tool query appstream.xmlb "components/component/id[@type=desktop][firefox.desktop]"
RESULT: firefox.desktop
RESULT: firefox.desktop
RESULT: firefox.desktop
real	0m0.009s
user	0m0.007s
sys	0m0.002s

9ms includes the time to load the file, search for all the components that match the query and the time to export the XML. You get three results as there’s one AppData file, one entry in the distro AppStream, and an extra one shipped by Fedora to make Firefox featured in gnome-software. You can see the whole XML component of each result by appending /.. to the query. Unlike appstream-glib, libxmlb doesn’t try to merge components – which makes it much less magic, and a whole lot simpler.
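For comparison, the same lookup expressed against stock ElementTree (the [firefox.desktop] predicate in libxmlb’s dialect is shorthand for a text match, written out explicitly here):

```python
# Query a tiny AppStream-style document for <id type="desktop"> nodes whose
# text is firefox.desktop -- libxmlb performs the same walk against the
# mmap'd binary tree instead of a freshly parsed DOM.
import xml.etree.ElementTree as ET

appstream = ET.fromstring("""
<components>
  <component><id type="desktop">firefox.desktop</id></component>
  <component><id type="desktop">gimp.desktop</id></component>
</components>
""")

results = [
    el.text
    for el in appstream.findall("./component/id[@type='desktop']")
    if el.text == "firefox.desktop"
]
print(results)  # ['firefox.desktop']
```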

Some questions answered:

  • Why not just use a GVariant blob?: I did initially, and the cache was huge. The deeply nested structure was packed inefficiently as you have to assume everything is a hash table of a{sv}. It was also slow to load; not much faster than just parsing the XML. It also wasn’t possible to implement the zero-copy XPath queries this way.
  • Is this API and ABI stable?: Not yet, as soon as gnome-software is ported.
  • You implemented XPath in C‽: No, only a tiny subset. See the README.md

Comments welcome.

Charles-Antoine Couret: [F29] Take part in the test day dedicated to Silverblue

Today, Thursday 20 September, is a day dedicated to one specific test: Silverblue. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to surface as many issues on the topic as possible.

It also provides a list of specific tests to run. You just have to follow them, compare your result against the expected result, and report it.

What does this test involve?

Silverblue is the code name for the Atomic flavour of Fedora Workstation. Until now, only the Cloud variant benefited from this approach.

The goal of this edition is, in short, to offer a Fedora Workstation built on different foundations from the traditional version. The aim is for applications to run in containers via Kubernetes or Flatpak, or to be managed via rpm-ostree. The latter versions the system across package installations and updates; in case of a problem, it is easy to ask the system to roll back to a known stable state. The system also becomes mostly read-only, to improve its reliability and security. As in a traditional Fedora, that security is supervised by SELinux.

The tests to run are:

  • Boot and log in without errors;
  • Start and stop services such as the SSH server;
  • Check that SELinux is enabled;
  • Check whether GNOME Software sends update notifications;
  • Check whether GNOME Software works correctly: install or remove packages;
  • Install a piece of software as a Flatpak.

These tests are fairly quick and easy to carry out.

How can you take part?

You can go to the test day page to see the available tests and report your results. The wiki page summarises the arrangements for the day.

If you need help while running the tests, feel free to drop by IRC to get a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you hit a bug, you will need to report it on Bugzilla. If you don’t know how, feel free to consult the corresponding documentation.

Moreover, even though a single day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

Red Hat Security: Security Embargos at Red Hat

The software security industry uses the term Embargo to describe the period of time that a security flaw is known privately, prior to a deadline, after which time the details become known to the public. There are no concrete rules for handling embargoed security flaws, but Red Hat uses some industry standard guidelines on how we handle them.

When an issue is under embargo, Red Hat cannot share information about that issue prior to it becoming public after an agreed upon deadline. It is likely that any software project will have to deal with an embargoed security flaw at some point, and this is often the case for Red Hat.

An embargoed flaw is most easily described as an undisclosed security bug: something that is not public information. Generally, the audience that knows about an embargoed security issue is very small, so that the bug can be fixed prior to a public announcement. Some security bugs are serious enough that it is in the best interest of all parties not to make the information public before vendors can prepare a fix. Otherwise, customers may be left with publicly-disclosed vulnerabilities but no solutions to mitigate them.

Each project, researcher, and distribution channel handles things a little differently, but the same principles still apply. The order is usually:

  • Flaw discovered
  • Flaw reported to vendor
  • Vendor responds
  • Resolution determined
  • Public announcement

Of course, none of this is set in stone and things (such as to whom the flaw is reported, or communications between a vendor and an upstream open source project) can change. The most important thing to remember is that all who have knowledge of an embargo need to be discreet when dealing with embargoed security flaws.

Q: We reported a flaw to Red Hat but I think some info may have been shared publicly. What should we do?
A: Contact Red Hat Product Security at secalert@redhat.com. We will work with you to assess any potential leaks and determine the best way forward.

It’s not uncommon for a flaw to be reported to a third party before the news makes its way to Red Hat or upstream. This can be through a distribution channel, a security research company, a group like CERT, or even another corporation that works on open source software. This means that whilst Red Hat may be privy to an embargoed bug, the conditions of that embargo may be set by external parties.

Public disclosure of a security flaw will usually happen on a predetermined date and time. These deadlines matter because the security community operates around the clock across time zones; an agreed moment ensures that fixes and details are published everywhere at once.

Contact Red Hat Product Security should you require additional help with security embargoes.

