Fedora Magazine: 4 cool new projects to try in COPR for April 2019


COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.


Joplin is a note-taking and to-do app. Notes are written in the Markdown format and organized into notebooks and with tags.
Joplin can import notes from any Markdown source, as well as notes exported from Evernote. In addition to the desktop app, there’s an Android version that can synchronize notes with the desktop — using Nextcloud, Dropbox or other cloud services. Finally, there’s a browser extension for Chrome and Firefox for saving web pages and screenshots.

Installation instructions

The repo currently provides Joplin for Fedora 29 and 30, and for EPEL 7. To install Joplin, use these commands:

sudo dnf copr enable taw/joplin
sudo dnf install joplin


Fzy is a command-line utility for fuzzy string searching. It reads from standard input, sorts the lines by how likely each is to be the sought-after text, and then prints the selected line. In addition to the command line, fzy can also be used within vim. You can try fzy in this online demo.
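The idea behind fuzzy searching can be sketched in a few lines of Python. This is a toy illustration only, not fzy’s actual scoring algorithm (fzy weighs factors such as consecutive matches and match position):

```python
def fuzzy_match(query: str, line: str) -> bool:
    """True if all characters of `query` appear in `line`, in order."""
    it = iter(line.lower())
    # `ch in it` consumes the iterator up to the first match,
    # so subsequent characters must appear later in the line.
    return all(ch in it for ch in query.lower())

def fuzzy_filter(query, lines):
    # Prefer shorter matching lines, very loosely mimicking
    # "what is most likely the sought-after text".
    return sorted((l for l in lines if fuzzy_match(query, l)), key=len)

print(fuzzy_filter("fzy", ["fizzy-drink", "fuzzy", "abc", "frenzy"]))
# → ['fuzzy', 'frenzy', 'fizzy-drink']
```

Real fuzzy finders refine the ranking step considerably, but the filter step is essentially this subsequence test.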

Installation instructions

The repo currently provides fzy for Fedora 29, 30, and Rawhide, and other distributions. To install fzy, use these commands:

sudo dnf copr enable lehrenfried/fzy
sudo dnf install fzy


Fondo is a program for browsing photographs from the unsplash.com website. It has a simple interface that lets you look for pictures from one of several themes, or from all of them at once. You can then set a picture you find as a wallpaper with a single click, or share it.

Installation instructions

The repo currently provides Fondo for Fedora 29, 30, and Rawhide. To install Fondo, use these commands:

sudo dnf copr enable atim/fondo
sudo dnf install fondo


YACReader is a digital comic book reader that supports many comic book and image formats, such as cbz, cbr, and pdf. YACReader keeps track of reading progress, and can download comic metadata from Comic Vine. It also comes with YACReader Library for organizing and browsing your comic book collection.

Installation instructions

The repo currently provides YACReader for Fedora 29, 30, and Rawhide. To install YACReader, use these commands:

sudo dnf copr enable atim/yacreader
sudo dnf install yacreader

Powered by WPeMatico


Remi Collet: PHP version 7.2.18RC1 and 7.3.5RC1


Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPMs of PHP version 7.3.5RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 30 or in the remi-php73-test repository for Fedora 27-29 and Enterprise Linux.

RPMs of PHP version 7.2.18RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 28-29 or in the remi-php72-test repository for Fedora 27 and Enterprise Linux.


PHP version 7.1 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.3 as Software Collection:

yum --enablerepo=remi-test install php73

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Update of system version 7.3:

yum --enablerepo=remi-php73,remi-php73-test update php*

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php*

Notice: version 7.3.5RC1 is also in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.6.

The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php72, php73)

Base packages (php)


Fedora Community Blog: FPgM report: 2019-16


Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week. The Fedora 30 final freeze is in effect. The Go/No-Go and release readiness meetings will be held on Thursday.

I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Announcements and help wanted

Help wanted

Meetings and test days

Fedora 30 Status

Final freeze is in effect. The Fedora 30 GA is scheduled for 30 April 2019.


  • 2019-04-30 — Final preferred target
  • 2019-05-07 — Final target date #1

Blocker bugs

Bug ID   Blocker status    Component        Bug Status
1693409  Accepted (Final)  gdm              NEW
1690429  Accepted (Final)  gnome-shell     ON_QA
1688462  Accepted (Final)  libdnf           POST
1697591  Proposed (Final)  xorg-x11-server  ASSIGNED
1701279  Proposed (Final)  appstream-data   ON_QA
1696270  Proposed (Final)  gnome-shell     ON_QA
89216    Proposed (Final)  openssh          ASSIGNED

Fedora 31 Status



Submitted to FESCo

The post FPgM report: 2019-16 appeared first on Fedora Community Blog.


Tony Asleson: DBUS Server side library wish list


Ideally, a DBUS server side library would provide the following:

  1. Fully implements the functionality needed for common interfaces (Properties, ObjectManager, Introspectable) in a sane and easy way, without requiring you to manually supply the interface XML.
  2. Allows you to register a number of objects simultaneously, for example when you have circular references. This avoids race conditions on the client.
  3. Ability to auto-generate signals when object state changes, and to represent the state of the object separately for each interface on each object.
  4. Freeze/Thaw on objects or the whole model to minimize the number of signals, but without requiring the user to manually code anything up for signals.
  5. Configurable process/thread model, and thread safe.
  6. Incrementally and dynamically add/remove an interface to an object without destroying and re-creating the object, while incrementally adding/removing the state as well.
  7. Handle object path construction issues, like having an object that needs an object path to another object that doesn’t yet exist.  This is alleviated if you have #8.
  8. Ability to create one or more objects without actually registering them with the service, so you can handle issues like #7 more easily; this is directly needed for #2. Thus you create one or more objects and register them together.
  9. Doesn’t require the use of code generating tools.
  10. Allows you to have multiple paths/namespaces which refer to the same objects. This would be useful for services that implement functionality of other services without requiring the clients to change.
  11. Allows you to remove a DBUS object from the model while you are processing a method on it, e.g. when the client is calling a remove method on the object it wants to remove.
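Item 4 can be illustrated with a small, library-agnostic Python sketch (all names here are hypothetical; no real D-Bus API is used): property changes made between freeze() and thaw() are coalesced into one signal per object and interface instead of one signal per change.

```python
class Model:
    """Illustrative freeze/thaw batching; not any real D-Bus library's API."""

    def __init__(self, emit):
        self._emit = emit      # callback standing in for a D-Bus signal emission
        self._frozen = 0       # freeze depth, so nested freeze/thaw pairs work
        self._pending = {}     # (path, interface) -> {property: value}

    def freeze(self):
        self._frozen += 1

    def thaw(self):
        self._frozen -= 1
        if self._frozen == 0:
            # One coalesced PropertiesChanged-style signal per (object, interface).
            for (path, iface), props in self._pending.items():
                self._emit(path, iface, props)
            self._pending.clear()

    def set_prop(self, path, iface, name, value):
        if self._frozen:
            self._pending.setdefault((path, iface), {})[name] = value
        else:
            self._emit(path, iface, {name: value})

signals = []
m = Model(lambda path, iface, props: signals.append((path, iface, props)))
m.freeze()
m.set_prop("/org/example/obj", "org.example.Iface", "State", "busy")
m.set_prop("/org/example/obj", "org.example.Iface", "Progress", 50)
m.thaw()
print(signals)  # one coalesced signal instead of two
```

Without the freeze/thaw pair, the same two set_prop calls would emit two separate signals.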


Fedora Community Blog: Stories from the amazing world of release-monitoring.org #4


The Future chamber was lit by hundreds of candles, with strange symbols glowing on the walls. In the center of the chamber I stood, wearing the ceremonial robe and preparing for the task that lay before me.

Somebody opened the door, and I turned around to see who it could be. “Oh, a pleasant surprise, traveler. Stand near the door and watch; you will love this.” I focused back on my thoughts and added, “Today I will show you the future that awaits this realm, but first we need to look at the current situation to understand the changes.”

Current situation

I started to recite the incantations, and above my head an image began to take form. “This manuscript will help you understand how release-monitoring.org works now. It’s simplified, so you aren’t bothered by details.”

Manuscript of the current situation

“As you can see, the first thing that happens is letting Anitya know that there is a project it should track. The project is added either through the Abstract Positive Intuition (API) or by some outside entity (user).”

“When the project is added to Anitya, we start to send messengers to every project and wait to see if something new is happening (a cron job checks every project for a new version). If we see something new, we send a message using the magical mirror (Fedora messaging).”

“And here is where the-new-hotness comes into play. It communes with the Great Oraculum (scm-requests pagure repository) to see if the Ever Vigilant Guard (package maintainer) wants to be notified about the news. If so, we send a messenger to the land of Bugzilla. Optionally we send another to the realm of Koji, if the Ever Vigilant Guard wants this (create a scratch build in Koji).”

“This is a simplified description of the current situation of release-monitoring.org. There are also plenty of additional smaller or larger issues that need to be addressed, but this is all for now. If you want to know more about the other issues, traveler, please visit the Bugcronomicon of Anitya and the-new-hotness.”

Near future

I broke my concentration and the image started to fade away. I began to gather my power again, this time to reveal more than just the current state of things; my mind had to flow through the currents of time. “Now I will show you the future that lies in front of us.” I concentrated again, and a new image started to take shape above my head.

Manuscript of the near future

“What you can see here is the near future: things that are either being worked on or already planned. So what is the difference from the current situation? Let’s see.”

“There will be a new option for adding projects. We will now welcome messengers from the far away realm of libraries.io. This new connection was requested by the mages from the realm of copr.”

“The next change is to stop sending messengers periodically to every project; instead we will use a queue (replacing the cron job with a service) that will first send messengers to the projects which denied access to previous messengers (rate limiting).”

“We also want to establish a new connection between the-new-hotness and Flathub to help them on their journey. We want to use the Abstract Positive Intuition (API) of the realm of GitHub to notify them about news related to their projects.”

“The Great Oraculum (scm-requests pagure repository) will no longer be bothered with our requests; we will ask the Ever Vigilant Guards (package maintainers) directly. This will allow them to easily choose whether they want to be notified or not. We can thank my fellow mage Pierre-Yves Chibon for this change.”

“Bugzilla will no longer be used; instead we will use a connection with the realm of Pagure, more specifically with the large island in Pagure known as dist-git, or Fedora package sources. You can read more about this in a previous entry of my journal.”

“There will also be other changes that are not visible at first sight. The most important of these is to look only for news that is really new. Right now our messengers always collect everything in the archive, and not all of it is really new. So we will add a few small changes to prevent this. The first is to remember when the last news was delivered and to check against this date every time a messenger is sent (check the HTTP If-Modified-Since header field). The situation is slightly different in the realm of GitHub, where we must remember an identifier (tag id) instead of a date.”
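In plainer terms, the last-check idea maps onto standard HTTP conditional requests: the client sends the timestamp of its previous successful check, and a server with nothing newer replies 304 Not Modified so the body can be skipped. A small Python sketch using only the standard library (the helper name is made up for illustration):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

# Hypothetical helper: build the conditional header from the last successful check.
def conditional_headers(last_check: datetime) -> dict:
    # HTTP dates are always expressed in GMT, per RFC 7231.
    return {"If-Modified-Since": format_datetime(last_check, usegmt=True)}

last = datetime(2019, 4, 1, 12, 0, tzinfo=timezone.utc)
headers = conditional_headers(last)
print(headers["If-Modified-Since"])
# → Mon, 01 Apr 2019 12:00:00 GMT
# Send `headers` with the next request; a "304 Not Modified" response
# means there is nothing new since `last` and the archive can be skipped.
```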

“Another invisible change is a mechanism to prevent the addition of duplicate projects. We will show the outside entity (user) the projects that are similar, so they can check that they aren’t adding a project that is already in Anitya. A related change is some normalization of the project (using the normalized homepage as the ecosystem instead of the one the user added).”

“This is everything I will show you from the near future; now I need some rest before I go further.” The image above my head slowly faded away as I started to think about something else. I looked to see if the traveler was still by the door. He was still standing there, waiting for any new information that I could reveal, but for now I had to rest.

Far away future

When I returned from my rest and entered the Future chamber again, the traveler already stood there, waiting for me. I took my position in the center of the room and started to concentrate again. “Now we will look further ahead in time. We will see how release-monitoring.org could look one day. This is not a settled future, so there could be changes that prevent this vision from coming true, but you will at least see what I’m talking about.” Another image started to form above my head.

Manuscript of the far future

“As you can see, the far future is not that different from the near one, but don’t be fooled: there is more here than meets the eye. First I will talk about the visible change. When a new project is added by an Ever Vigilant Guard (package maintainer) to the Fedora universe and the Ever Vigilant Guard wants to be notified about it, it will be automatically added to Anitya. No need to do this manually anymore. There could even be something similar for Flathub.”

“What about the other things that can’t be seen in the manuscript? One of them is allowing a project to be added simply by giving Anitya its address (reading all metadata from the URL). This could be a real life changer for the many people who work with Anitya.”

“There will also be a statistics collector, because every mage loves information (a new page containing statistics about recent runs, with information about failed, rate-limited, and succeeded checks), and we will use nice magic images (graphs) to show these to others.”

“We will also allow the Ever Vigilant Guard (package maintainer) to show us where the news should be delivered in the land of Pagure (allow mapping a version prefix to a branch for creating PRs).”

“In the future Anitya will not only send the latest news when sending a message through the magical mirror (Fedora messaging), but all the news received from the latest messenger (the Fedora messaging message will contain all new versions retrieved in the latest check). Together with the previous change, this will be really helpful to every Ever Vigilant Guard.”

“To make projects simpler to read, we will introduce a division of news into categories specified by outside entities (users could create a version stream for different version prefixes like ‘3.’ or ‘4.’ and show them as tabs on the project page).”

“This is all I will show you today. I hope you are as excited as I am about the future that lies before us, traveler. Things will probably change before we get there, but that is life: an ever-changing shapeshifter.” I slowly turned my mind away from the future and walked out of the Future chamber together with the traveler.

Post scriptum

This is all for now from the world of release-monitoring.org. Do you like this world and want to join our conclave of mages? Seek me (mkonecny) in the magical yellow pages (IRC freenode #fedora-apps) and ask how you can help. Or visit the Bugcronomicon (GitHub issues on Anitya or the-new-hotness) directly and pick something to work on.

The post Stories from the amazing world of release-monitoring.org #4 appeared first on Fedora Community Blog.


Sirko Kemter: Khmer Translation Sprint 3


Mark J. Wielaard: Valgrind 3.15.0 with improved DHAT heap profiler


Julian Seward released valgrind 3.15.0 which updates support for existing platforms and adds a major overhaul of the DHAT heap profiler.  There are, as ever, many refinements and bug fixes.  The release notes give more details.

Nicholas Nethercote used the old experimental DHAT tool a lot while profiling the Rust compiler and then decided to write and contribute A better DHAT (which contains a screenshot of the new graphical viewer).


  • The XTree Massif output format now makes use of the information obtained when specifying --read-inline-info=yes.
  • amd64 (x86_64): the RDRAND and F16C insn set extensions are now supported.



  • DHAT has been thoroughly overhauled, improved, and given a GUI.  As a result, it has been promoted from an experimental tool to a regular tool.  Run it with --tool=dhat instead of --tool=exp-dhat.
  • DHAT now prints only minimal data when the program ends, writing the bulk of the profiling data to a file instead.  As a result, the --show-top-n and --sort-by options have been removed.
  • Profile results can be viewed with the new viewer, dh_view.html.  When a run ends, a short message is printed, explaining how to view the result.
  • See the documentation for more details.


  • cg_annotate has a new option, --show-percs, which prints percentages next to all event counts.


  • callgrind_annotate has a new option, --show-percs, which prints percentages next to all event counts.
  • callgrind_annotate now inserts commas in call counts, and sorts the caller/callee lists in the call tree.


  • The default value for --read-inline-info is now yes on Linux/Android/Solaris. It is still no on other OSes.


  • The option --xtree-leak=yes (to output leak result in xtree format) automatically activates the option --show-leak-kinds=all, as xtree visualisation tools such as kcachegrind can in any case select what kind of leak to visualise.
  • There has been further work to avoid false positives.  In particular, integer equality on partially defined inputs (C == and !=) is now handled better.


  • The new option --show-error-list=no|yes displays, at the end of the run, the list of detected errors and the used suppressions.  Prior to this change, showing this information could only be done by specifying -v -v, but that also produced a lot of other possibly-non-useful messages.  The option -s is equivalent to --show-error-list=yes.


Rafał Lużyński: New Japanese era


On 1 May 2019 a new era of the Japanese calendar will begin. Fedora is ready for this change.

What This Is All About

On 1 December 2017 the Emperor of Japan, Akihito, officially announced that he would abdicate on 30 April 2019. From 1 May his successor Naruhito will rule, which will also begin a new era in the Japanese calendar. This is a rather unusual event, because so far emperors have ruled until their death. Obviously, this made the moment of the era change difficult to predict. The emperor’s decision will help the country prepare for the change.

Although the Gregorian calendar (the same as in many countries around the world) is known and used in Japan, the traditional Japanese calendar is also used, with the years counted from the enthronement of an emperor. Each period of an emperor’s rule is called an era and has its own proper name. For example, the current era is named Heisei (平成); at the time of writing we are in year 31 of the Heisei era.

On 1 April, one month before the beginning of the new era, its name was announced: it will be named Reiwa (令和). Now that we know it, we can adapt computers and other devices so they display dates correctly.

How To Test

In Unix systems, dates are formatted by the strftime() function, and the easiest way to test it is to use the date command. Launched from the command line, it displays the current date and time in our own language, using the default format:

The change applies to the Japanese calendar which is available only in Japanese locales:

There is still nothing interesting because the default date format in Japanese locale displays the Gregorian calendar. A custom date format must be used:

So here we have something interesting: year 31 of the Heisei era has been displayed.

So far we are in April, still in the old Japanese era. How do we display a date from the new era? Let’s use commands which display a date one month ahead:

If we still see year 31 of the Heisei era, then this is a bug. An updated system should display:

Why has the command LC_ALL=ja_JP.utf8 date +%EY -d "1 month" not displayed any number? The first year of an era is usually not written with the number 1 but described with the word gannen (元年), which means “the initial year”.
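The era logic that the updated glibc applies can be approximated in a few lines of Python. This is only a sketch covering the two most recent eras (their start dates are public facts: Heisei began on 8 January 1989, Reiwa begins on 1 May 2019), including the gannen rule for year 1:

```python
from datetime import date

# Era start dates, newest first. A real implementation covers earlier eras too.
ERAS = [
    (date(2019, 5, 1), "令和"),  # Reiwa
    (date(1989, 1, 8), "平成"),  # Heisei
]

def japanese_year(d: date) -> str:
    """Format a date's year in the Japanese era notation, e.g. 平成31年."""
    for start, name in ERAS:
        if d >= start:
            n = d.year - start.year + 1
            # Year 1 is written 元年 ("gannen"), not 1年.
            return name + ("元年" if n == 1 else f"{n}年")
    raise ValueError("dates before the Heisei era are not handled in this sketch")

print(japanese_year(date(2019, 4, 30)))  # → 平成31年
print(japanese_year(date(2019, 5, 1)))   # → 令和元年
```

Note that eras change mid-year, so the Gregorian year alone is not enough to pick the era; the full date matters.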

More Explanations

Let’s explain what those magical symbols like +%Ec mean. The character "+" means that the following string is a date format which has to be used. The character "%" marks the beginning of a format specifier, and the character "E" means that the alternative calendar should be used. Here is a summary of the format specifiers which have been used for the tests:

%EC  the name of an era (e.g., Heisei, Reiwa)
%EY  year number including the era name
%Ey  year number in the Japanese calendar without an era name
%Ec  full date and time in the Japanese calendar
%Ex  full date (without the time) in the Japanese calendar

Compare this with:

%C  century number minus 1, or the initial digits of the year (currently 20—yes, not 21)
%Y  year number in the Gregorian calendar (2019)
%y  year number abbreviated to 2 digits (19)
%c  full date and time
%x  full date (without the time)

Please note that the "E" character has no effect in English locales; it is simply ignored.

The -d switch means that date command should display a different date than the current one, for example -d "1 month" means one month ahead from now.

How To Update

For the calendar to support the new Japanese era, the glibc library must be updated to at least the following version:

Fedora 28  glibc-2.27-38
Fedora 29  glibc-2.28-27
Fedora 30  glibc-2.29-9
Fedora Rawhide  glibc-2.29.9000-10
RHEL 7/CentOS 7  glibc-2.17-260.el7_6.4

Fedora Rawhide is the base for the future Fedora 31 release (November 2019), but before this happens the new glibc 2.30 will be released, which will support the new Japanese era from the beginning.


Peter Czanik: What syslog-ng relays are good for


While there are some users who run syslog-ng as a stand-alone application, the main strength of syslog-ng is central log collection. In this case the central syslog-ng instance is called the server, while the instances sending log messages to the central server are called the clients. There is a (somewhat lesser known) third type of instance called the relay, too. The relay collects log messages via the network and forwards them to one or more remote destinations after processing (but without writing them onto the disk for storage). A relay can be used for many different use cases. We will discuss a few typical examples below.

Note that the syslog-ng application has an open source edition (OSE) and a premium edition (PE). Most of the information below applies to both editions. Some features are only available in syslog-ng PE and some scenarios need additional licenses when implemented using syslog-ng PE.

UDP-only source devices

Typically, most network devices send log messages over UDP only. Even though some of them support TCP-based logging, vendors recommend not to use it (as in many cases the TCP logging implementation is extremely buggy). UDP does not guarantee that all UDP packets will be delivered, so it is a weak point of the system. To ensure at least a best effort level of reliability, it is recommended to deploy a relay on the network, close to these source devices. With the least possible (and, more importantly, the most reliable) hops between the source and the relay, the risk of losing UDP packets can be minimized. Once the packet arrives at the relay, we can ensure the messages are delivered to the central server in a reliable manner, based on TCP/TLS and ALTP (syslog-ng PE only: Advanced Log Transfer Protocol).
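Such a relay can be sketched in syslog-ng configuration. This is a minimal illustration with hypothetical host names and ports: a real configuration also needs a @version line, and the tls() block with certificate options on the encrypted hop.

```
# Minimal relay sketch: receive plain UDP locally, forward over TLS.
source s_udp {
    network(transport("udp") port(514));
};

destination d_central {
    # Hypothetical central server; add tls(...) certificate options for real use.
    network("central.example.com" transport("tls") port(6514));
};

log {
    source(s_udp);
    destination(d_central);
};
```

The point of the design is that the unreliable UDP hop is kept as short as possible, while the long hop to the central server uses a reliable, encrypted transport.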

Too many source devices

Depending on the hardware and configuration, an average syslog-ng instance can usually handle the following number of concurrent connections:

1. If the maximum message rate is lower than 200,000 messages per second:

◦ maximum ca. 5,000 TCP connections

◦ maximum ca. 1,000 TLS connections

◦ maximum ca. 1,000 ALTP connections

2. If the message rate is higher than 200,000 messages per second, always contact One Identity.

As a result, if you have more source devices, you need to deploy at least one relay machine per 5,000 sources and batch up all the logs into a single TCP connection that connects the relay to the server. If TLS or ALTP is used, relays should be deployed per 1,000 source devices.

Collecting logs from remote sites (especially over public WAN)

It is quite common that companies need to collect log messages from geographically remote sites (sometimes at global distance), and sometimes over public WAN. In this case it is recommended to install at least one relay node per remote site. The relay can be the last outgoing hop for all the messages of the remote site, which has several benefits:

  • Maintenance: you only need to change the configuration of the relay if you want to re-route the logs of some or all sources of the remote site. Plus, you don’t need to change each source’s configuration one by one.

  • Security: If you trust your internal network, it is not necessary to hold encrypted connections within the LAN of the remote site, as the messages can get to the relay without encryption. Naturally, messages should be sent in an encrypted way over the public WAN, and it is enough to hold only a single TCP/TLS connection between the sites (that is, between the remote relay and the central server). This eliminates wasted resources, as holding several TLS connections directly from the clients is more costly than holding a single connection from the relay.

  • Reliability: It is possible to set up a ‘main’ disk-buffer on the relay. This main disk-buffer is only responsible for buffering all the logs of the remote site if the central syslog-ng server is temporarily unavailable. Of course, it is easier to maintain this single large main disk-buffer than to set up disk-buffers on individual client machines.

Separation / distribution / balancing of message processing tasks

Most Linux applications have their own human readable, but difficult to handle, log messages. Without parsing and normalization, it is difficult to alert and report on these log messages. Many syslog-ng users utilize the message parsing tools of syslog-ng to normalize their different log messages. Just like normalization, filtering can also be resource-heavy, depending on what the filtering is based on. In this case, it might be inefficient to perform all the message processing tasks on the server (which can result in decreased overall performance). It is a typical setup to deploy relays in front of the central server operating as a receiver front-end. Most resource-heavy tasks (for example, parsing, filtering, etc.) are performed on this receiver layer. As all resource-heavy tasks are performed on the relay, the central server behind it only needs to get the messages from the relay and write them into the final text-based or tamper-proof format (logstore, PE only). As you have the means to run more relays, you can balance the resource-heavy tasks between more relays, and a single server behind them can still be fast enough to write all the messages onto the disk.

Acting as a relay is a matter of configuration, not hardware: a relay doesn’t have to be a dedicated relay machine at all. In fact, it can be one of the clients with a relay configuration in terms of log collection. On the other hand, in a robust log collection infrastructure the relays have their own purpose, and therefore it is recommended to run dedicated relay machines in such cases.

When it comes to the commercial PE version of syslog-ng, the relays are included in the price (at least up to the licensed LSH number). Hence, you can run several parallel relays to ensure horizontal redundancy. Let’s say each of the relays has the very same configuration; if one goes down, another relay can take over processing. Distribution of the logs can be done by the built-in client-side failover functionality, or by a general load-balancer as well. The latter is also used to serve N+1 redundant relay deployments (in this case, switching from one relay to another happens not only due to an outage, but for real load-balancing purposes, too).

What syslog-ng relays are NOT good for

The purpose of the relay is to buffer the logs for short term (for example, a few minutes or a few hours long, depending on the log volume) outages. It is not designed to buffer logs generated by the sources during a very long (for example, up to a few days long) server or connection outage.

If you expect extended outages, we recommend that you deploy servers instead of relays. There are many deployments where long term storage and archiving are performed on the central syslog-ng server, but relays also do short-term log storage. From the syslog-ng PE point of view, these are servers, and thus need separate server licenses.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/balabit/syslog-ng. On Twitter, I am available as @PCzanik.


Fedora Magazine: Managing RAID arrays with mdadm


Mdadm stands for Multiple Disk and Device Administration. It is a command line tool that can be used to manage software RAID arrays on your Linux PC. This article outlines the basics you need to get started with it.

The following five commands allow you to make use of mdadm’s most basic features:

  1. Create a RAID array:
    # mdadm --create /dev/md/test --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  2. Assemble (and start) a RAID array:
    # mdadm --assemble /dev/md/test /dev/sda1 /dev/sdb1
  3. Stop a RAID array:
    # mdadm --stop /dev/md/test
  4. Delete a RAID array:
    # mdadm --zero-superblock /dev/sda1 /dev/sdb1
  5. Check the status of all assembled RAID arrays:
    # cat /proc/mdstat
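The status from the last command can also be inspected programmatically. A rough Python sketch follows; the regex covers only a simple layout like the embedded sample (real /proc/mdstat output has many more variants, such as rebuild progress lines and bitmap entries):

```python
import re

# Sample /proc/mdstat output for a healthy two-disk RAID1 array.
SAMPLE = """\
Personalities : [raid1]
md127 : active raid1 sdb1[1] sda1[0]
      1047552 blocks super 1.0 [2/2] [UU]

unused devices: <none>
"""

def mdstat_arrays(text: str) -> dict:
    """Map array name -> (state, RAID level, member devices)."""
    arrays = {}
    for m in re.finditer(r"^(md\d+) : (\w+) (raid\d+) (.+)$", text, re.M):
        name, state, level, members = m.groups()
        # Member entries look like "sda1[0]"; strip the role number.
        devs = [d.split("[")[0] for d in members.split()]
        arrays[name] = (state, level, devs)
    return arrays

print(mdstat_arrays(SAMPLE))
# → {'md127': ('active', 'raid1', ['sdb1', 'sda1'])}
```

On a real system you would pass `open("/proc/mdstat").read()` instead of the sample string.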

Notes on features

mdadm --create

The create command shown above includes the following four parameters in addition to the create parameter itself and the device names:

  1. --homehost:
    By default, mdadm stores your computer’s name as an attribute of the RAID array. If your computer name does not match the stored name, the array will not automatically assemble. This feature is useful in server clusters that share hard drives because file system corruption usually occurs if multiple servers attempt to access the same drive at the same time. The name any is reserved and disables the homehost restriction.
  2. --metadata:
    mdadm reserves a small portion of each RAID device to store information about the RAID array itself. The metadata parameter specifies the format and location of the information. The value 1.0 indicates to use version-1 formatting and store the metadata at the end of the device.
  3. --level:
    The level parameter specifies how the data should be distributed among the underlying devices. Level 1 indicates each device should contain a complete copy of all the data. This level is also known as disk mirroring.
  4. --raid-devices:
    The raid-devices parameter specifies the number of devices that will be used to create the RAID array.

By using level=1 (mirroring) in combination with metadata=1.0 (store the metadata at the end of the device), you create a RAID1 array whose underlying devices appear normal if accessed without the aid of the mdadm driver. This is useful in the case of disaster recovery, because you can access the device even if the new system doesn’t support mdadm arrays. It’s also useful in case a program needs read-only access to the underlying device before mdadm is available. For example, the UEFI firmware in a computer may need to read the bootloader from the ESP before mdadm is started.

mdadm --assemble

The assemble command above fails if a member device is missing or corrupt. To force the RAID array to assemble and start when one of its members is missing, use the following command:

# mdadm --assemble --run /dev/md/test /dev/sda1

Other important notes

Avoid writing directly to any devices that underlie an mdadm RAID1 array. That causes the devices to become out-of-sync, and mdadm won’t know that they are out-of-sync. If you access a RAID1 array with a device that’s been modified out-of-band, you can cause file system corruption. If you modify a RAID1 device out-of-band and need to force the array to re-synchronize, delete the mdadm metadata from the device to be overwritten and then re-add it to the array, as demonstrated below:

# mdadm --zero-superblock /dev/sdb1
# mdadm --assemble --run /dev/md/test /dev/sda1
# mdadm /dev/md/test --add /dev/sdb1

These commands completely overwrite the contents of sdb1 with the contents of sda1.

To have RAID arrays automatically activated when your computer starts, create an /etc/mdadm.conf configuration file.

For the most up-to-date and detailed information, check the man pages:

$ man mdadm 
$ man mdadm.conf

The next article in this series will show a step-by-step guide on how to convert an existing single-disk Linux installation to a mirrored-disk installation that will continue running even if one of its hard drives suddenly stops working!
