Richard Hughes: More fun with libxmlb

A few days ago I cut the 0.1.4 release of libxmlb, which is significant because it includes the last three features I needed in gnome-software to achieve the same search results as appstream-glib.

The first is something most users of database libraries will be familiar with: Bound variables. The idea is you prepare a query which is parsed into opcodes, and then at a later time you assign one of the ? opcode values to an actual integer or string. This is much faster as you do not have to re-parse the predicate, and also means you avoid failing in incomprehensible ways if the user searches for nonsense like ]@attr. Borrowing from SQL, the syntax should be familiar:

g_autoptr(XbQuery) query = xb_query_new (silo, "components/component/id[text()=?]/..", &error);
xb_query_bind_str (query, 0, "gimp.desktop", &error);

The second feature makes the caller jump through some hoops, but hoops that make things faster: Indexed queries. As might be apparent to some, libxmlb stores all the text in a big deduplicated string table after the tree structure is defined. That means that no matter how many times component appears in the source document, we only store the string once! When we actually set up an object to check a specific node for a predicate (for instance text()='fubar') we do a strcmp("fubar", "component") internally, which in most cases is very fast…

Unless you do it 10 million times…

Using indexed strings tells the XbMachine processing the predicate to first check if fubar exists in the string table, and if it doesn’t, the predicate can’t possibly match and is skipped. If it does exist, we know the integer position in the string table, and so when we compare the strings we can just check two uint32_t’s, which is quite a lot faster, especially on ARM for some reason. In the case of fwupd, it is searching for a specific GUID when returning hardware results. Using an indexed query takes the per-device query time from 3.17ms to about 0.33ms – which, if you have a large number of connected updatable devices, makes a big difference to the user experience. As using indexed queries can have a negative impact and requires extra code, they are probably only useful in a handful of cases. In case you do need this feature, this is the code you would use:

xb_silo_query_build_index (silo, "component/id", NULL, &error); // the cdata
xb_silo_query_build_index (silo, "component", "type", &error); // the @type attr
g_autoptr(XbNode) n = xb_silo_query_first (silo, "component/id[text()=$'test.firmware']", &error);

The indexed form is denoted by $'' rather than the normal pair of single quotes. If there is something more standard to denote this kind of thing, please let me know and I’ll switch to that instead.

The third feature is stemming: you can search for “gaming mouse” and still get results that mention games, game and Gaming. This is also how you can search for words like Kongreßstraße and match kongressstrasse. In an ideal world stemming would be computationally free, but if we are comparing millions of records each call to libstemmer sure adds up. Adding the stem() XPath operator took a few minutes, but making it usable took up a whole weekend.

The query we wanted to run would be of the form id[text()~=stem('?')] but the stem() would be called millions of times on the very same string for each comparison. To fix this, and to make other XPath operators faster, I implemented an opcode rewriting optimisation pass in the XbMachine parser. This means if you call lower-case(text())==lower-case('GIMP.DESKTOP') we only call the UTF-8 strlower function N+1 times, rather than 2N times. For lower-case() the performance increase is slight, but for stem() it actually makes the feature usable in gnome-software. The opcode rewriting optimisation pass is kinda dumb in how it works (“let’s try all combinations!”), but works with all of the registered methods, and makes all existing queries faster for almost free.
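Purely as an illustration (not lifted from the gnome-software code), a stemmed lookup of the form above would look something like this, with the bound ? swapped for a literal string and the XPath prefix borrowed from the earlier examples:

g_autoptr(GError) error = NULL;
g_autoptr(XbNode) n = xb_silo_query_first (silo, "components/component/id[text()~=stem('gaming')]", &error);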

One common question I’ve had is if libxmlb is supposed to obsolete appstream-glib, and the answer is “it depends”. If you’re creating or building AppStream metadata, or performing any AppStream-specific validation, then stick to the appstream-glib or appstream-builder libraries. If you just want to read AppStream metadata you can use either, but if you can stomach a binary blob of rewritten metadata stored somewhere, libxmlb is going to be a couple of orders of magnitude faster and use a ton less memory.

If you’re thinking of using libxmlb in your project, send me an email and I’m happy to add more documentation where required. At the moment libxmlb does everything I need for fwupd and gnome-software, and so apart from bugfixes I think it’s basically “done”, which should make my manager somewhat happier. Comments welcome.

Dan Walsh: Container Labeling

An issue was recently raised on libpod, the github repo for Podman.

“container_t isn’t allowed to access container_var_lib_t”

Container policy is defined in the container-selinux package. By default, containers run with the SELinux type “container_t”, no matter which container engine launched them: podman, cri-o, docker, buildah, moby. Most people who use SELinux with container runtimes like runc or systemd-nspawn use it as well.

By default container_t is allowed to read/execute labels under /usr, and to read generically labeled content in the host’s /etc directory (etc_t).

The default label for content in /var/lib/docker and /var/lib/containers is container_var_lib_t. This is not accessible by containers (container_t), whether they are running under podman, cri-o, docker, buildah… We specifically do not want containers to be able to read this content, because content on storage back ends that use block devices, like devicemapper and btrfs (I believe), is labeled container_var_lib_t when the containers are not running.

For overlay content, which containers need to read/execute, we use the type container_share_t. So container_t is allowed to read/execute container_share_t files, but not write/modify them.

Content under /var/lib/containers/overlay* and /var/lib/docker/overlay* is labeled container_share_t by default.

$ grep overlay /etc/selinux/targeted/contexts/files/file_contexts
/var/lib/docker/overlay(/.*)? system_u:object_r:container_share_t:s0
/var/lib/docker/overlay2(/.*)? system_u:object_r:container_share_t:s0
/var/lib/containers/overlay(/.*)? system_u:object_r:container_share_t:s0
/var/lib/containers/overlay2(/.*)? system_u:object_r:container_share_t:s0
/var/lib/docker-latest/overlay(/.*)? system_u:object_r:container_share_t:s0
/var/lib/docker-latest/overlay2(/.*)? system_u:object_r:container_share_t:s0
/var/lib/containers/storage/overlay(/.*)? system_u:object_r:container_share_t:s0
/var/lib/containers/storage/overlay2(/.*)? system_u:object_r:container_share_t:s0

The label container_file_t is the only type that is writeable by containers.  container_file_t  is used when the overlay mount is created for the upper directory  of an image. It is also used for content mounted from devicemapper and btrfs.  

If you volume mount a directory into a container and add a :z or :Z suffix, the container engine relabels the content under the volume to container_file_t.
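For example, with podman, relabeling a (hypothetical) host directory as a private volume would look like this; :Z gives the volume a label only this container can use, while :z would give it a shared label usable by several containers:

$ podman run -v /srv/webdata:/data:Z fedora ls -lZ /data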

Failure to read/write/execute content labeled container_var_lib_t is expected.  

When I see this type of AVC, I expect that this is either a volume mounted in from /var/lib/containers or /var/lib/docker, or mislabeled content under an overlay directory like /var/lib/containers/storage/overlay.
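A quick way to check for that kind of mislabeling is to compare the label a path actually has against the label the policy expects for it, for example:

$ ls -dZ /var/lib/containers/storage/overlay
$ matchpathcon /var/lib/containers/storage/overlay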

Solution:

To solve these, I usually recommend running 

restorecon -R -v /var/lib/containers
restorecon -R -v /var/lib/docker

Or, if it is a volume mount, to use :z or :Z.

Kiwi TCMS: Kiwi TCMS 6.2.1

We’re happy to announce Kiwi TCMS version 6.2.1! This is a small release
that includes some improvements and bug-fixes. You can explore everything at
https://demo.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  24338088bf46    956.8 MB
kiwitcms/kiwi       6.2     7870085ad415    957.6 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955.7 MB
kiwitcms/kiwi       6.1     b559123d25b0    970.2 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970.1 MB
kiwitcms/kiwi       5.3.1   a420465852be    976.8 MB

Changes since Kiwi TCMS 6.2

Improvements

Bug fixes

  • Fix InvalidQuery, field TestCase.default_tester cannot be both deferred and
    traversed using select_related at the same time. References
    Issue #346

Refactoring

  • Pylint fixes (Ivaylo Ivanov)
  • Remove JavaScript and Python functions in favor of existing JSON-RPC
  • Remove vendored-in js/lib/jquery.dataTables.js which is now replaced by
    the npm package datatables.net (required by Patternfly)

Translations

Misc

  • https://demo.kiwitcms.org is
    using a new SSL certificate with serial number
    46:78:80:EA:80:A4:FC:65:17:E4:59:EC:1D:C2:27:47
  • Version 6.2.1 has been published to
    PyPI to facilitate people who want
    to deploy Kiwi TCMS on Heroku. Important: PyPI package will be provided
    as a convenience for those who know what they are doing. Valid bugs and
    issues will be dealt with accordingly. As we do not deploy from a PyPI
    tarball we ask you to provide all the necessary
    details when reporting issues! If you have no idea what all of this means
    then stick to the official Docker images!

How to upgrade

If you are using Kiwi TCMS as a Docker container then:

cd Kiwi/
git pull
docker-compose down
docker pull kiwitcms/kiwi
docker pull centos/mariadb
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Don’t forget to backup
before upgrade!

WARNING: kiwitcms/kiwi:latest and docker-compose.yml will
always point to the latest available version! If you have to upgrade in steps,
e.g. between several intermediate releases, you have to modify the above workflow:

# starting from an older Kiwi TCMS version
docker-compose down
docker pull kiwitcms/kiwi:
edit docker-compose.yml to use kiwitcms/kiwi:
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate
# repeat until you have reached latest

Happy testing!

Fedora Magazine: Model the brain with the NEST simulator on Fedora

The latest version of the NEST simulator is now available in Fedora as part of the NeuroFedora initiative. NEST is a standard tool used by computational neuroscientists to build large-scale computer models of the brain that are needed to investigate, among other things, how the brain processes information.

The NEST Eco-system

NEST offers a wide range of ready-to-use models, excellent documentation, and is supported by a thriving Open Source development community.

It provides a simple Python interface which makes it really easy to use.
In addition, it is designed so it can be run on both laptops and supercomputing clusters. That way it can be used to make models that range from a few neurons to those that include millions of neurons. For reference, the human brain contains 86 billion neurons on average!

NEST uses the Message Passing Interface (MPI) to run on such clusters, but it must be built separately to support it.

Install NEST

To make it easier for users, we provide multiple variants of NEST. For example, to install a version that doesn’t use MPI for use on a workstation/laptop, one can use:

$ sudo dnf install nest python3-nest

Install NEST with MPI support

Fedora includes two implementations of MPI: MPICH and OpenMPI, and NEST has been built for both. For the MPICH version, one simply installs the mpich variants:

$ sudo dnf install nest-mpich python3-nest-mpich

For OpenMPI, the commands are similar:

$ sudo dnf install nest-openmpi python3-nest-openmpi

Finally the following command loads the MPI environment modules in order to activate the correct NEST variant:

$ module load mpi/mpich-x86_64  # mpi/openmpi-x86_64 for openmpi

Next, NEST uses some environment variables, which can be set up by sourcing the nest_vars.sh file:

$ which nest_vars.sh
/usr/lib64/mpich/bin/nest_vars.sh
$ source /usr/lib64/mpich/bin/nest_vars.sh
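
With the environment set up, a NEST script can then be launched across several MPI processes. A hedged example (the script name is just a placeholder):

$ mpirun -np 4 python3 my_nest_model.py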

Using NEST

After the installation and configuration of NEST, you can start using it inside a Python shell.

$ ipython3
In [1]: import nest
[INFO] [2018.10.16 12:27:43 /builddir/build/BUILD/nest-simulator-2.16.0-mpich/nestkernel/rng_manager.cpp:238 @ Network::create_rngs_] : Creating default RNGs
[INFO] [2018.10.16 12:27:43 /builddir/build/BUILD/nest-simulator-2.16.0-mpich/nestkernel/rng_manager.cpp:284 @ Network::create_grng_] : Creating new default global RNG

Oct 16 12:27:43 SLIStartup [Error]:
NEST_DOC_DIR is not usable:

Oct 16 12:27:43 SLIStartup [Error]:
Directory '/usr/lib64/mpich/share/doc/nest' does not exist.

Oct 16 12:27:43 SLIStartup [Error]:
I'm using the default: /usr/lib64/mpich/share/doc/nest

-- N E S T --
Copyright (C) 2004 The NEST Initiative

Version: v2.16.0
Built: Oct 5 2018 20:22:17

This program is provided AS IS and comes with
NO WARRANTY. See the file LICENSE for details.

Problems or suggestions?
Visit http://www.nest-simulator.org

Type 'nest.help()' to find out more about NEST.

In [2]: nest.version()
Out[2]: 'NEST 2.16.0'
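
To get a feel for the API, here is a minimal sketch against the PyNEST interface shown above (NEST 2.16); it drives a single integrate-and-fire neuron with a Poisson spike source and counts the spikes it emits. The model names are standard NEST models, while the rate and weight values are purely illustrative:

import nest

# one leaky integrate-and-fire neuron, a Poisson generator to drive it,
# and a spike detector to record its output
neuron = nest.Create("iaf_psc_alpha")
noise = nest.Create("poisson_generator", params={"rate": 80000.0})
detector = nest.Create("spike_detector")

nest.Connect(noise, neuron, syn_spec={"weight": 1.2})
nest.Connect(neuron, detector)

nest.Simulate(1000.0)  # simulate 1000 ms of biological time
print(nest.GetStatus(detector, "n_events")[0], "spikes recorded")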

NEST documentation is provided in the nest-doc package, and we also provide a README.fedora file in all nest packages (nest, nest-mpich, nest-openmpi) that provides detailed instructions on using the different variants. The same file can also be found here.

If you run into issues, find a bug, or just want to chat, you can find the NeuroFedora SIG here.

Michael Catanzaro: The GNOME (and WebKitGTK+) Networking Stack

WebKit currently has four network backends:

  • CoreFoundation (used by macOS and iOS, and thus Safari)
  • CFNet (used by iTunes on Windows… I think only iTunes?)
  • cURL (used by most Windows applications, also PlayStation)
  • libsoup (used by WebKitGTK+ and WPE WebKit)

One guess which of those we’re going to be talking about in this post. Yeah, of course, libsoup! If you’re not familiar with libsoup, it’s the GNOME HTTP library. Why is it called libsoup? Because before it was an HTTP library, it was a SOAP library. And apparently somebody thought that when Mexican people say “soap,” it often sounds like “soup,” and also thought that this was somehow both funny and a good basis for naming a software library. You can’t make this stuff up.

Anyway, libsoup is built on top of GIO’s sockets APIs. Did you know that GIO has Object wrappers for BSD sockets? Well it does. If you fancy lower-level APIs, create a GSocket and have a field day with it. Want something a bit more convenient? Use GSocketClient to create a GSocketConnection connected to a GNetworkAddress. Pretty straightforward. Everything parallels normal BSD sockets, but the API is nice and modern and GObject, and that’s really all there is to know about it. So when you point WebKitGTK+ at an HTTP address, libsoup is using those APIs behind the scenes to handle connection establishment. (We’re glossing over details like “actually implementing HTTP” here. Trust me, libsoup does that too.)
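To make that concrete, here is a minimal sketch (not code from libsoup itself, and the host name is just a placeholder) of opening a plain TCP connection with GSocketClient and writing to it:

#include <gio/gio.h>
#include <string.h>

int
main (void)
{
  g_autoptr(GError) error = NULL;
  g_autoptr(GSocketClient) client = g_socket_client_new ();

  /* resolve the name and establish the TCP connection */
  g_autoptr(GSocketConnection) conn =
      g_socket_client_connect_to_host (client, "example.com", 80, NULL, &error);
  if (conn == NULL)
    {
      g_printerr ("failed to connect: %s\n", error->message);
      return 1;
    }

  /* a GSocketConnection is a GIOStream, so just write to its output stream */
  const char *request = "HEAD / HTTP/1.0\r\n\r\n";
  GOutputStream *out = g_io_stream_get_output_stream (G_IO_STREAM (conn));
  g_output_stream_write_all (out, request, strlen (request), NULL, NULL, &error);
  return 0;
}

(Build it against gio-2.0 via pkg-config, as for any GIO program.)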

Things get more fun when you want to load an HTTPS address, since we have to add TLS to the picture, and we can’t have TLS code in GIO or GLib due to this little thing called “copyright law.” See, there are basically three major libraries used to implement TLS on Linux, and they all have problems:

  • OpenSSL is by far the most popular, but it’s, hm, shall we say technically non-spectacular. There are forks, but the forks have problems too (ask me about BoringSSL!), so forget about them. The copyright problem here is that the OpenSSL license is incompatible with the GPL. (Boring details: Red Hat waves away this problem by declaring OpenSSL a system library qualifying for the GPL’s system library exception. Debian has declared the opposite, so Red Hat’s choice doesn’t gain you anything if you care about Debian users. The OpenSSL developers are trying to relicense to the Apache license to fix this, but this process is taking forever, and the Apache license is still incompatible with GPLv2, so this would make it impossible to use GPLv2+ software except under the terms of GPLv3+. Yada yada details.) So if you are writing a library that needs to be used by GPL applications, like say GLib or libsoup or WebKit, then it would behoove you to not use OpenSSL.
  • GnuTLS is my favorite from a technical standpoint. Its license is LGPLv2+, which is unproblematic everywhere, but some of its dependencies are licensed LGPLv3+, and that’s uncomfortable for many embedded systems vendors, since LGPLv3+ contains some provisions that make it difficult to deny you your freedom to modify the LGPLv3+ software. So if you rely on embedded systems vendors to fund the development of your library, like say libsoup or WebKit, then you’re really going to want to avoid GnuTLS.
  • NSS is used by Firefox. I don’t know as much about it, because it’s not as popular. I get the impression that it’s more designed for the needs of Firefox than as a Linux system library, but it’s available, and it works, and it has no license problems.

So naturally GLib uses NSS to avoid the license issues of OpenSSL and GnuTLS, right?

Haha no, it uses a dynamically-loadable extension point system to allow you to pick your choice of OpenSSL or GnuTLS! (Support for NSS was started but never finished.) This is OK because embedded systems vendors don’t use GPL applications and have no problems with OpenSSL, while desktop Linux users don’t produce tivoized embedded systems and have no problems with LGPLv3. So if you’re using desktop Linux and point WebKitGTK+ at an HTTPS address, then GLib is going to load a GIO extension point called glib-networking, which implements all of GIO’s TLS APIs — notably GTlsConnection and GTlsCertificate — using GnuTLS. But if you’re building an embedded system, you simply don’t build or install glib-networking, and instead build a different GIO extension point called glib-openssl, and libsoup will create GTlsConnection and GTlsCertificate objects based on OpenSSL instead. Nice! And if you’re Centricular and you’re building GStreamer for Windows, you can use yet another GIO extension point, glib-schannel, for your native Windows TLS goodness, all hidden behind GTlsConnection so that GStreamer (or whatever application you’re writing) doesn’t have to know about SChannel or OpenSSL or GnuTLS or any of that sad complexity.

Now you know why the TLS extension point system exists in GIO. Software licenses! And you should not be surprised to learn that direct use of any of these crypto libraries is banned in libsoup and WebKit: we have to cater to both embedded system developers and to GPL-licensed applications. All TLS library use is hidden behind the GTlsConnection API, which is really quite nice to use because it inherits from GIOStream. You ask for a TLS connection, have it handed to you, and then read and write to it without having to deal with any of the crypto details.
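As a hedged sketch of what that looks like from the application side, the only change from the plain-socket example above is asking GSocketClient for TLS before connecting; whichever extension point is installed does the handshake behind the scenes:

g_autoptr(GSocketClient) client = g_socket_client_new ();

/* ask GIO to wrap the connection in TLS; glib-networking (or glib-openssl,
 * or glib-schannel) provides the GTlsConnection implementation used here */
g_socket_client_set_tls (client, TRUE);

g_autoptr(GSocketConnection) conn =
    g_socket_client_connect_to_host (client, "example.com", 443, NULL, &error);
/* read and write to conn exactly as before; no crypto details in sight */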

As a recap, the layering here is: WebKit -> libsoup -> GIO (GLib) -> glib-networking (or glib-openssl or glib-schannel).

So when Epiphany fails to load a webpage, and you’re looking at a TLS-related error, glib-networking is probably to blame. If it’s an HTTP-related error, the fault most likely lies in libsoup. Same for any other GNOME applications that are having connectivity troubles: they all use the same network stack. And there you have it!

P.S. The glib-openssl maintainers are helping merge glib-openssl into glib-networking, such that glib-networking will offer a choice of GnuTLS or OpenSSL, obsoleting glib-openssl. This is still a work in progress. glib-schannel will be next!

P.P.S. libcurl also gives you multiple choices of TLS backend, but makes you choose which at build time, whereas with GIO extension points it’s actually possible to choose at runtime from the selection of installed extension points. The libcurl approach is fine in theory, but creates some weird problems, e.g. different backends with different bugs are used on different distributions. On Fedora, it used to use NSS, but now uses OpenSSL, which is fine for Fedora, but would be a license problem elsewhere. Debian actually builds several different backends and gives you a choice, unlike everywhere else. I digress.


Open Source Security Podcast: Episode 122 – What will Apple's T2 chip mean for the rest of us?

Josh and Kurt talk about Apple’s new T2 security chip. It’s not open source but we expect it to change the security landscape in the coming years.

Show Notes

Ankur Sinha “FranciscoD”: NeuroFedora update: week 45

In week 45:

All new packages must go through Fedora’s QA (testing) process before being
made available to end users in the repositories. You can help test these
packages by following the instructions here.

A lot of the software we worked on this week was related to neuro-imaging, and
fortunately, a lot of it was Python-based, which is usually quite simple to
build. The coming week, though, I intend to work on NEURON. Unfortunately,
NEURON isn’t the easiest to build:

  • It depends on iv, which bundles a really old version of libtiff. I’ve filed
    a ticket here about
    this, but have not had the time to port the code to the newest version of
    libtiff.
  • NEURON bundles Random123, which was relatively easy to remove. However,
    NEURON also bundles a really old version of the SUNDIALS libraries, and
    updating the code to use the latest versions is not straightforward. I have
    filed an issue about it here now. This is based on
    my initial investigations into building NEURON. So there’s a chance that
    more work will need to be done once these issues are solved.

There is a lot of software available in NeuroFedora already. You can see the
complete list here on Fedora SCM. Software that is currently
being worked on is listed on our Pagure project instance. If you use software that
is not on our list, please suggest it to us using the suggestion form.

Feedback is always welcome. You can get in touch with us here.

The Fedora community: enabling Open Science

While the NeuroFedora SIG is actively working on these packages, it would not
be possible without our friends in the Fedora community that have helped with
the various stages of the package maintenance pipeline.

We’re grateful to the various upstreams that we’re bothering with issues, and
everyone in the Fedora community (including people I may have missed) for
enabling us to further Open Science via Fedora.

Fedora Community Blog: FPgM report: 2018-45

Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week.

I’ve set up weekly office hours in #fedora-meeting-1 (note a change in channel). Drop by if you have any questions or comments about the schedule, Changes, elections or anything else.

Announcements

Help wanted

Upcoming meetings

Fedora 29 Status

  • Nominations for the Fedora 29 election cycle open on Wednesday 14 November

Fedora 30 Status

Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.

Fedora 30 includes a Change that will remove glibc langpacks from the buildroot. See the devel mailing list for more information and impacted packages.

Changes

Announced

Submitted to FESCo


RHEL Developer: Why you should care about RISC-V

If you haven’t heard about the RISC-V (pronounced “risk five”) processor, it’s an open-source (open-hardware, open-design) processor core created at the University of California, Berkeley. It exists in 32-bit, 64-bit, and 128-bit variants, although only 32- and 64-bit designs exist in practice. The news is full of stories about major hardware manufacturers (Western Digital, NVidia) looking at or choosing RISC-V cores for their products.

But why should you care? You can’t just go to the local electronics boutique and buy a RISC-V laptop or blade server. RISC-V commodity hardware is either scarce or expensive. It’s all still early in its lifespan and development, not yet ready for enterprise tasks. Yet it’s still something that the average professional should be aware of, for a number of reasons.

By now everyone has heard about the Meltdown and Spectre issues, and related “bugs” users have been finding in Intel and AMD processors. This blog is not about how hard CPU design is – it’s hard. Even harder than you realize. The fear created by these bugs was not that there was a problem in the design, but that users of these chips had no insight into how these “black boxes” worked, no way to review code that was outside their control, and no way to audit these processors for other security issues. We’re at the mercy of the manufacturer to assure us there are no more bugs left (ha!).

The advantage of an open core here is that a company can audit the internal workings of a processor, at least in theory. If a bug is found by one chip manufacturer using a RISC-V core, the fix can be shared with other manufacturers. And certainly, if there are bugs to be exploited, the black hats and white hats will be able to find them (and fix them) much faster and sooner.

And what if you do want to try a RISC-V system today? Support for 64-bit RISC-V cores with common extensions (MAFD – multiply/divide, atomics, float, and double – aka the ‘G’ set) was added to the GNU C Library (glibc) in version 2.27, which means (for example) Fedora 28 contains RISC-V support. Bootable images are available, which run in a qemu emulator (standard in Fedora) or on real hardware (such as the SiFive HiFive Unleashed board).

A team of volunteers (of which I am one) is currently working on building the latest Fedora packages for RISC-V on a large number of emulators and a small number of hardware systems, such as this one (mine):

HiFive1 Board

An early access RISC-V development system. Upper right is the HiFive board. Bottom is a VC707 board which provides a PCIe bridge. Middle left is a PCIe riser board. At the top is a commodity PCIe SSD card. Connections on the right: USB serial console, ethernet, power. Additional mess is optional, and at the discretion of the desk owner.

But are there down sides to choosing an open core? Well, there are considerations that anyone should be aware of when choosing any core. Here are a few:

  • More flexibility for you. If you need to integrate a core into a custom ASIC for your hardware, with custom peripherals, an open core gives you a good base core to work from. However…
  • More work for you. A core is just a core, you need to add everything (serial ports, DDR interfaces) yourself.
  • A wider range of options and configurations. You get to decide which extensions and peripherals your core will have, which minimizes space and cost of each implementation. However…
  • A fragmented ecosystem is possible. If you customize your core too much, you might need to customize the tools to match, and sharing code and designs becomes more complicated. Distributions like Fedora standardize on a set of common extensions that manufacturers can include to ensure compatibility.
  • An open design means anyone can audit the design for security. However…
  • An open design means everyone must audit the design for security. Perhaps an ecosystem for audits and auditing will arise.
  • An open design can be cheaper on a per-core basis, due to the lack of licensing costs and freely available tooling. However…
  • An open design can be more expensive due to the lack of a robust ecosystem to drive engineering costs down.

So, like all things engineering… YMMV.

In summary… any time something new comes around, in this case a new processor core and a new way of thinking about the intellectual property therein, it offers users more choices about where they want to put their efforts, resources, and risks. For me, having (and supporting) a new architecture gives me an opportunity to hone my skills as well as revisit old decisions about how platforms can be used portably.

Resources

