What is secure erase?

Embedded devices today store a wide variety of data. You would be forgiven for thinking that when data is removed from such a device, it’s completely gone. Unfortunately, that isn’t always the case. While sometimes data is inherently secure through techniques like encryption or encoding, not all device designs provide secure means of data removal.

Just how hard is it to remove embedded device data?

For electronic media, the data must be both erased and overwritten – only then is it securely deleted from the drive. Use cases that demand this thorough level of data erasure include temporary storage of secure data (for example, a web browser cache), changing users on a shared device, and preparing a device for sale. Another use case is remote theft – a “kill pill” that removes secure data before hackers gain access.

Data that’s removed from such devices can sometimes be recovered – a potentially significant security risk. Suboptimal data removal can lead to sensitive data falling into the wrong hands, and may even reduce the lifetime of the device itself. For these reasons, methods like secure erase are used to make sure data that needs to be disposed of gets properly removed, without the possibility of recovery.

Overwriting data for proper security

Secure erase is a data sanitization method for completely erasing data off of a device. More specifically, it’s a group of firmware commands that together function as an interface for secure data removal. Importantly, secure erase does not simply move data to a different location on the device. Instead, sanitization methods like secure erase aim to permanently wipe data from the device, preventing recoverability.

Secure erase works by overwriting the data at its location with new data that’s random and useless (typically patterns of binary 1s and 0s). Once this overwriting has been accomplished, software-based data recovery methods (like file or partition recovery programs) won’t be able to recover the data. Furthermore, because secure erase is a command baked into the firmware, any missed write operations are checked – ensuring a more complete and watertight overwriting process.
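At the application level, the overwrite-before-delete principle can be sketched in a few lines of Python. This is only an illustration of the idea – the function name and pass count are my own, and a file-level overwrite cannot reach remapped or spare blocks; only a firmware-level secure erase command can.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents with random bytes, then unlink it.

    Illustration only: a true secure erase is a firmware command that also
    covers remapped and spare blocks, which no file-level overwrite reaches.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random, useless data
            f.flush()
            os.fsync(f.fileno())  # push the overwrite down to the device
    os.remove(path)
```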

The above overwriting process is also affected by the form of media on the device. NAND media, for example, is particularly tricky. It adds layers of difficulty to secure erasure, because updated data is first written to a new location while the old copy lingers behind – a technique called “copy-on-write”.

While not everyone may agree on the very best method of data sanitization, secure erase is widely considered popular and reliable. It remains a good choice when a permanent solution is needed for data removal on embedded devices.

Secure erase and NAND at Embedded World 2021

Secure erase is a topic with a lot of detail – far too much for a single blog post. Join me this week at Embedded World, where I’ll be giving the following talk on secure erase on NAND media:

Title of talk: “Keeping device data safe with secure erase”

Session: 4.8 Safety & Security: Security Hardware

Date/Time: Wednesday, March 3, 2:00 PM – 2:30 PM (CET)

Abstract: Removing data securely from flash media is more challenging than older magnetic designs. The software and firmware must work in unison to provide secure solutions that are increasingly in demand. In this talk, we detail the secure interface from the application to the media and point out the possible pitfalls along the way.

After my talk, I’ll be online to answer your questions and talk about secure erase and NAND media.

Final thoughts

It is important to remember that for proper data security, how you get rid of the data is just as important as how you protect it while it’s kept on the device. It is not enough to store data securely and reliably – it must also be disposed of with the correct methods. Optimal data security is a process that encompasses the design of the entire embedded system – from the chosen media through the application itself.


Let’s talk about maximizing the security of your embedded device data.


Comparing protocols for USB devices – which one's more significant?

Universal Serial Bus (USB) is a commonly used interface for transferring data between devices. To achieve that communication, USB-connected devices use storage protocols. Embedded devices connected through USB have two different ways of revealing the stored contents of their media: USB Mass Storage (UMS) and the Media Transfer Protocol (MTP). Both of these protocols allow files to be copied to and from host computers, dragged and dropped through GUI interfaces, and enable control of the media. Put simply, the protocol is what allows devices to communicate by way of USB. What are the benefits and disadvantages of these two protocols?

A tale of two protocols

Before I go any further, a little bit of background. Back in 2013, I spoke about comparing two protocols for USB devices. Examples of these designs, sporting operating systems and storage, ran the gamut from MP3 players to smartphones to handheld scanners and peripherals. Bluetooth was around, but uncommon for embedded designs and data transfer. As the years have rolled by, USB technology has become faster and more powerful. How do these protocols measure up now, seven years later? Let’s first look at a technical description, and then an update on where we are today.

USB Mass Storage class, also called UMS, is a protocol that reveals the media block device to the connected host computer. In order for the host to use this storage, a common file system format is required. In other words, if a device with UMS has media formatted with exFAT, the host computer needs an exFAT driver to access those files. Usually the host driver has exclusive access to the media – this is why, on Windows, a user sees an “eject” option that should be used before removing the device. USB sticks and SD cards both operate in the same fashion.

The Media Transfer Protocol, also called MTP, is another option. This is an extension of the Picture Transfer Protocol originally developed for cameras. A host computer with an MTP driver is able to communicate with the device at a higher level than the block storage, using packet commands to access and copy files. This removes the requirement that a host computer have a file system driver to match the media format. The device no longer has exclusive access, and no “eject” requirement exists for media connected with this protocol.

Seven years later – is interoperability the deciding factor?

Time has shown connectivity and interoperability to be highly salient factors for USB protocols. According to recent research, more host computers today are running Linux than seven years ago, with both Linux and Mac workstations being more common in business environments. Each of these has stable MTP drivers. Similarly, Android smartphones use ext4 as their default file system, and these connect seamlessly over MTP with a Windows 10 desktop – which has no file system driver for ext4. So if Google moves Android to another default Linux file system in the future, no changes will be required to host system MTP drivers.

Removing the “eject” requirement is another benefit of this difference. As long as the device has its own power, no loss of data can be expected when a resting device is unplugged – data cached on the device end will be written as normal. Device manufacturers are free to choose a reliable file system to match their design goals, instead of being limited to a standard to be matched on all host computers.

Final thoughts

While both MTP and UMS have their uses, the improved connectivity and file system interoperability afforded by MTP is a considerable advantage. And when it comes to preventing the corruption of valuable data in USB connected devices, that advantage becomes even more notable.

If you have any questions about USB or file systems, don’t hesitate to reach out to us. Our global team of file system experts works with a specific focus on customer success.


Let’s talk more about keeping file systems data secure.


Wrapping up GENIVI AMM '20: Q&A and whitepaper

Thank you to Mike Nunnery and the GENIVI team for making this All Member Meeting a success at the end of October. While our booth didn’t have many attendees, our talk on Solving Data Storage Challenges in the Automotive Projects of Tomorrow was well attended. In fact, the video is available on YouTube.


Below are some of the questions we received, along with slightly more detailed answers.

How should I estimate the lifetime for a memory component?

This starts with a conversation with the memory vendor(s). They provide information in datasheets, and connecting their numbers to block sizes – and from there to actual lifetime – is essential math. The vendor can also help you understand any firmware they have chosen for eMMC, UFS, NVMe, and the like. Once your team can translate these values, you can examine write amplification and start making more comprehensive estimates.

Once the design team has prototype designs available, you can start testing and simulating use cases. Be sure to factor in how the design handles under extreme conditions, where bit errors (and subsequent cleanup) can be more frequent. We provide a flash testing service that can measure this level of detail.
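As a rough illustration of the math above, here is a minimal lifetime estimate in Python. Every figure in the example is a hypothetical placeholder, not vendor data: capacity and rated P/E cycles would come from the datasheet, while host writes and write amplification would come from measuring your actual workload.

```python
def estimated_lifetime_years(capacity_gb: float, pe_cycles: int,
                             host_writes_gb_per_day: float,
                             write_amplification: float) -> float:
    """Back-of-the-envelope media lifetime from datasheet-style figures."""
    total_writable_gb = capacity_gb * pe_cycles            # ideal raw endurance
    media_writes_per_day = host_writes_gb_per_day * write_amplification
    return total_writable_gb / media_writes_per_day / 365

# Hypothetical example: 32 GB eMMC rated for 3,000 P/E cycles, a workload
# writing 4 GB/day from the host, and a measured write amplification of 3.
years = estimated_lifetime_years(32, 3000, 4, 3)   # roughly 22 years
```

Prototype testing then replaces these placeholder inputs with measured ones, which is where estimates become trustworthy.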

As you look ahead towards 2021, where do you see the opportunities and challenges for the data storage market?

Security can provide both challenges and opportunities.

Just as we are working in locations we didn’t expect at the beginning of the year, we are also using devices in more locations than we previously expected. The challenge is figuring out the broad range of expected and unexpected options for a design. Availability for an over-the-air update is one that we spoke of (the FCA Uconnect bug), and I expect with autonomous driving situations, many more will occur.

An opportunity here is to respond to security issues with a consistent message. How to handle secure erase – from the firmware through the file system and into the application – is just one example. The flash media takes time to erase; but customers may not want the system to be unavailable while it does this work. How the system is designed to handle a power interruption and restoration during this work is also important.

With all the functional safety requirements, is this not handled automatically by the Flash memory components themselves and the associated operating system?

Broadly speaking, there are no certified memory components available. The biggest hurdle is determinism, because NAND media write times can vary with prior conditions and even the age of the part. While bit error correction, properly handled, should always return a complete block of data, we have also seen situations where that data is valid, but stale – part of a previous system state. Media redundancy could be an option to solve this sort of problem.

On properly functioning media, the next step is the file system. This can be built into the operating system or added later. Great strides are being made towards certification at this level, with solid designs and traceable testing. A related trend is tracking the process through methods like Automotive SPICE.

As you are looking at data storage challenges in automotive, particularly this year and as we move into next year, how would you prioritize these challenges, in terms of significance and what we need to be focusing on over the next 6-12 months?

During this session, we spoke about situations where programs and applications were writing more to the media than originally expected – Tesla and the desktop Spotify bug. I think it’s crucial to guard against the unknown future. This could perhaps be done by the hypervisor, limiting what a given guest OS is able to write, or at the level of the operating system (especially Android). File systems could play their part by utilizing a quota system or the like.
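A quota of the kind mentioned above might look something like this toy Python sketch. The class and its interface are entirely hypothetical – real enforcement would live in the hypervisor, operating system, or file system – but it shows the idea of budgeting what any one application may write.

```python
class WriteBudget:
    """Toy per-application daily write quota, in bytes.

    Hypothetical sketch: real enforcement would live in the hypervisor,
    the operating system, or the file system itself.
    """
    def __init__(self, daily_limit_bytes: int) -> None:
        self.limit = daily_limit_bytes
        self.used = 0

    def request(self, nbytes: int) -> bool:
        """Record the write if it fits in today's budget; deny it otherwise."""
        if self.used + nbytes > self.limit:
            return False  # over quota: protect the media's P/E cycles
        self.used += nbytes
        return True
```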

To an extent, this is a lifetime challenge, and I would prioritize this detail above all. As my colleague writes, cars aren’t cell phones, and most consumers won’t stand for a cell-phone like lifetime from any vehicle.

Other challenges include new technologies, security, and multi-channel multithreaded access from many applications and devices. I think some progress will be made on these in the next year, but they aren’t quite as crucial as preventing future failures from unknown applications.

Do you have any suggestions for finding the root cause of corrupted data?

When it comes to automotive file system challenges, data corruption is something we’ve had plenty of experience in tackling. I’ve written a whitepaper on this very issue, in which I talk about some of the ways we at Tuxera approach data corruption, including (but not limited to) issues in automotive systems.

Read the whitepaper here.

Final thoughts

Thank you once again for the interesting questions. Hopefully my answers have helped shed new light onto automotive file systems. And please enjoy the whitepaper.

Let us help you solve your data storage challenges.


The hidden costs of automotive storage decisions: flash media wear and MCU failures in 159,000 Tesla cars

Flash lifetime can’t be ignored. Late last year, Tesla had problems with the flash storage memory in its connected cars. The company was in the news again this month, when the Office of Defects Investigations released their report summarizing the MCU failures that affect approximately 159,000 vehicles. This is interesting as much for the report as for the reaction among embedded developers, some of whom still don't understand that flash media has a limited lifetime.

Examining the metrics, visualizing the costs

Let’s take a closer look at the report itself. The Office of Defects Investigations report noted that there were 2,936 complaints, with thankfully no injuries, fatalities, crashes, or fires. Another 12,523 warranty and non-warranty claims for MCU replacements are also factored into this report. It is good that none of these MCU failures are directly related to safety. The closest problems related to total failure seem to be loss of HVAC controls (for defogging windows) and the Advanced Driver Assistance Support (ADAS).

What I found interesting about the report are Tesla's internal metrics for measuring the flash media wear in the vehicle. Each erase block on the Hynix media is rated for 3000 Program/Erase (P/E) cycles in total. Tesla described nominal daily P/E cycle use as a rate of 0.7 per block, and estimated for that rate that 11-12 years would be required to accumulate a total of 3000 P/E cycles per block. For the 8 GB media, that would work out to 5.6 GB written to the media per day. The file system writes considerably less than that, of course, due to write amplification.

Also highlighted were higher rate users, the 95th percentile of daily use. Tesla expected their P/E cycle use to rate as high as 1.5, where it would take 5-6 years to accumulate the maximum P/E cycles.

The rates of 0.7 and 1.5 are dependent on the chosen media and available space, of course. As of May 2020, Tesla remanufacturing began producing spare parts incorporating a Micron eMMC with 64 GB of storage. This should also bring those rates down by a factor of 8 – assuming the Micron part has a similar P/E cycle lifetime.
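The report’s figures are easy to sanity-check. This short Python sketch reproduces the arithmetic from the numbers above (3,000 rated P/E cycles, 8 GB of media, daily rates of 0.7 and 1.5).

```python
CAPACITY_GB = 8          # original Hynix eMMC capacity
RATED_PE_CYCLES = 3000   # rated endurance per erase block

def years_to_wear_out(pe_cycles_per_day: float) -> float:
    """Years until a block accumulates its full rated P/E cycle count."""
    return RATED_PE_CYCLES / pe_cycles_per_day / 365

nominal = years_to_wear_out(0.7)   # ~11.7 years, matching Tesla's 11-12
heavy = years_to_wear_out(1.5)     # ~5.5 years for the 95th percentile
daily_gb = CAPACITY_GB * 0.7       # ~5.6 GB written to the media per day
```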

Importantly, all the complaints and claims for MCU replacements represent just a small percentage of the 159,000 Model S and Model X vehicles. Tesla did indicate that MCU failures are likely to continue to occur in the subject vehicles until 100% of units have failed. An expensive replacement of either the media or the entire MCU board is the only alternative. Tesla has admitted as much, recently informing its customers by email that the eMMC in the affected vehicles is warranted for 8 years or 160,000 km, and that it will upgrade the part from 8 GB to 64 GB. Tesla has also agreed to reimburse past repairs. All in all, a costly outcome.

Can patching up the damage be enough?

Tesla has not been idle. Through OTA updates, Tesla has already released six firmware patches to help deal with the problem. These patches have at least tried to alleviate the previously mentioned loss of HVAC controls and ADAS problems first. The patches overall have ranged from removing high-frequency system log messages and suppressing informational log messages when no user was in the vehicle, to increasing the journal commit interval and reducing the settings database write frequency.

Unfortunately, these firmware patches are unlikely to be enough. Once P/E cycles are used, they cannot be regained without replacing the media. A patch late in the cycle will, at best, add only a year of life to the vehicle.

It is also unlikely that future automotive designs will be able to solve problems with reduced logging. If anything, data recording is expected to grow over the next decade. Hypervisors and domain controllers collect data from multiple sensors, storing to common media devices. Another larger source of growth will be autonomous vehicles, with multiple video streams and even more sensor data. These factors highlight the continuing importance of edge storage in the vehicle, as well as proper flash memory management.

Understand the storage stack – before things go wrong

So where should Tesla go from here to deal with all this? At Tuxera, we have encountered issues like Tesla’s numerous times. Our recommendation remains the same as when we wrote about this topic a year ago. Namely, that a complete and correct understanding of the memory devices (and their limitations) and other software components related to data management (the file system and flash management) are key to understanding systems that are designed to be robust. This is the approach that guides our continued collaboration with customers and partners on activities such as workload analysis, lifetime estimation, write amplification measures, and ultimately the selection of that data management software.

Final thoughts

As we have mentioned before, we’re fans of Tesla. But the Office of Defects Investigations report paints a picture of the potential damage that can result from an incomplete understanding of a vehicle’s storage stack. With proper flash memory testing methods unique to the needs of a given use case, flash memory failures can more effectively be prevented.

We work closely with OEMs and Tier 1s to identify flash needs specific to each unique use case. Let’s solve your automotive data storage challenges.


Wrapping up the Embedded Online Conference: Q&A

For much of 2020, we’ve been talking about avoiding end-of-life from NAND correctable errors. Recently, I spoke about this very topic at the Embedded Online Conference, where I got to digitally interact with many of you, and received your questions. For those not up to speed on the entire topic, please feel free to see the whitepaper we produced here. This topic brought up some interesting questions that I think warrant a little more discussion and digging.

All about the firmware

Perhaps the most common question was, “Where is the error management actually being handled?” For an example project – an ARM single board computer running Linux, with ext3 file systems on both microSD and eMMC – the answer starts with the firmware. This is special code written to work with the NAND flash media and controller. On Linux, there are also drivers to connect that firmware to a standard block device layer, allowing the developer to use block tools like encryption.

While error management is handled by the firmware, the file system can make requests which make that management much easier on the media, adding lifetime to the design. In this case, the interface used is known as Trim or Discard – a notification from the file system that blocks are no longer being used. Developers can use flash storage with the Trim or Discard notifications turned off, and they may see higher short-term performance – but both long term performance and media lifetime will suffer.
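The long-term effect of Trim/Discard can be illustrated with a toy garbage-collection model. This is not any vendor’s firmware algorithm – just a common rule of thumb that write amplification grows as the fraction of blocks the firmware must treat as live approaches 100%.

```python
def write_amplification(live_fraction: float) -> float:
    """Rough greedy-GC model: the less free space the firmware can see,
    the more live pages it must copy per erase block it reclaims."""
    assert 0.0 <= live_fraction < 1.0
    return 1.0 / (1.0 - live_fraction)

# With Trim/Discard, deleted files free up blocks the firmware can reuse:
with_trim = write_amplification(0.5)      # half the media holds live data
# Without it, once every block has been written, nearly everything
# looks live to the firmware:
without_trim = write_amplification(0.95)
```

Every extra unit of write amplification consumes P/E cycles, which is why disabling these notifications trades short-term performance for lifetime.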

Handling errors on flash media designs

Another question I received was related to special flash media designs that contain a one-time programmable (OTP) section. This sort of read-only area can be used for system firmware or default configuration settings. Even that use case does not mean it is impossible for bit errors to occur there. If the OTP section is provided by the vendor (and their firmware), they may have a contingency to handle the situation – reprogramming in place while maintaining power. This is a question worth asking. If the OTP section is more of a design choice by the development team, I would suggest working with the vendor and a flash software team to make sure errors are properly handled. In such cases, optimized and tailored support is crucial. Our team at Tuxera offers design review services which may be helpful.

Some designs, however, use flash media that doesn’t have firmware. We refer to this as “raw flash”, and on Linux that can mean using a flash file system such as YAFFS, JFFS2, or UBIFS. This software must include the error handling logic that decides whether to ignore a bit error for now, or correct it by relocating the data. Balancing this choice depends on use case and desired lifetime, and it’s something I discuss in our whitepaper. Unfortunately, the Linux flash file systems relocate the data on the first bit error, which can reduce lifetime considerably. This was a good choice when NAND controllers could only handle error correction on 4 bits of data, but modern controllers can perform bit correction on 40 or more bits per NAND block.
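The relocation decision described above can be sketched as a simple threshold policy. The function and its tunable threshold are hypothetical – real raw-flash drivers weigh more factors – but it captures the difference between relocating on the first bit error and relocating only as errors approach the controller’s ECC capacity.

```python
def should_relocate(correctable_bits: int, ecc_capacity: int,
                    threshold_fraction: float = 0.5) -> bool:
    """Relocate a block's data only once correctable errors approach the
    controller's ECC capacity, instead of on the very first bit error.

    threshold_fraction is a hypothetical, tunable policy knob.
    """
    if correctable_bits >= ecc_capacity:
        return True  # at or past the limit: the data must move now
    return correctable_bits >= ecc_capacity * threshold_fraction

# One bit error on a modern 40-bit controller need not force relocation:
first_error_modern = should_relocate(1, ecc_capacity=40)    # False
# A nearly worn block should be relocated before ECC is exhausted:
nearly_worn = should_relocate(38, ecc_capacity=40)          # True
```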

Tuxera’s FlashFX Tera is a Linux solution which can handle these situations with ease. To learn more about it, click here.

Final thoughts

I’ve really appreciated getting to answer questions and discuss file systems with other enthusiasts at the Embedded Online Conference. Later this month, I’ll be speaking on the topic of automotive security at GENIVI AMM. It will be another great opportunity to talk to you about embedded software – this time in the automotive domain. I’m looking forward to the questions and comments from all of you – perspectives that I’m sure will have me thinking about data storage in a new way.

Let us help you solve your data storage challenges.


The Embedded Online Conference – ongoing tech seminars a click away

Just last month, I spoke at the fully virtual Embedded Online Conference. Avoiding end of life from NAND correctable errors is a topic I’ve covered in the past, and it's still just as relevant when it comes to flash memory lifetime.

But just how did I end up speaking at the Embedded Online Conference in the first place?

I was on the train from Frankfurt to Nuremberg for Embedded World 2019, where I was speaking on a couple of topics, and manning the tradeshow booth. I pulled out my laptop to get a little work done, and noticed the gentleman across from me was doing the same. We got to chatting, and I found out that he, Jacob Beningo, also worked in the embedded systems industry, and was looking forward to the three-day show.

Jacob was a consultant, and pitched an interesting idea about an online conference. His idea was that attendees could go to virtual sessions, handle questions through a forum interface, and there would even be a virtual "trade show" floor with product demonstrations. I was definitely interested, and it turns out Jacob was ahead of his time.

Still live, still connected

Surveying my inbox, it looks like the remaining tradeshows this year are going virtual. Fortunately, the folks at Embedded Online Conference had a head-start. They put together a really nice site, with presentations "going live" at particular times. These sessions (and the show) will remain live through July – so you can watch the talks at your own pace, and leave questions and comments too. What's more, there is even a healthy discount on the registration page if you have been furloughed or laid off because of the Coronavirus – this is a great opportunity for training!

I've had some great questions from my talk, and I'm already thinking hard to come up with topics for next year's conference. Will I see you there?

Visit the Embedded Online Conference site


Help! Why are my embedded devices failing?

When devices fail, the problems can be numerous. In conversations with the embedded OEMs we work with, a common issue affects almost every manufacturer – the cost of diagnosing and fixing the causes of field failure. This impacts time-to-market and pulls resources away from development, to be used instead for field diagnostics and post-mortem analysis. This issue is especially relevant for the following reasons:

  1. The need for defect prevention during field operations: The high degree of reliability required for protecting critical data dictates that devices must not fail. To ensure that devices are fail-safe against wear, manufacturers are required to run extensive tests for a range of user scenarios so as to safeguard against edge cases. The analysis of test results can be a daunting task due to the several interfaces between hardware, software, and application layers. Hence, there is a need to continuously track these interactions, so that during a failure, any difference in the interactions can be discovered and corrected.
  2. Vulnerability of devices to wear-related failures: As flash media continues to increase in density and complexity, it’s also becoming more vulnerable to wear-related failures. With the shrinking lithography come increased ECC requirements and the move to more bits per cell. With this also comes a concern that what was written to the disk may not in fact be what is read off the disk. However, most applications assume that the data written to the file system will be completely accurate when read back. If the application does not fully validate the data read, there may be errors in the data that cause the application to fail, hang, or just misbehave. These complications require checks to validate the data read against the data written, so as to prevent device failures due to data corruption.
  3. Complexity of hardware and software integration: The complex nature of hardware and software integration within embedded devices makes finding the cause of failures a painstaking job, one that requires coordination between several hardware and software vendors. For this reason, it often takes OEMs days to investigate causes at the file system layer alone. Problems below that layer can entail more extensive testing and involve multiple vendors. Log messages can help manufacturers pinpoint the location of failure so that the correct vendor can be notified.
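The read-back validation described in point 2 can be sketched with a checksum stored alongside the payload. This is a minimal Python illustration using CRC32; the helper names are my own, and a production design would likely checksum at a finer granularity.

```python
import zlib

def write_with_checksum(path: str, payload: bytes) -> None:
    """Store a CRC32 in front of the payload so reads can be validated."""
    crc = zlib.crc32(payload).to_bytes(4, "big")
    with open(path, "wb") as f:
        f.write(crc + payload)

def read_validated(path: str) -> bytes:
    """Return the payload, or raise if what was read isn't what was written."""
    with open(path, "rb") as f:
        blob = f.read()
    crc, payload = blob[:4], blob[4:]
    if zlib.crc32(payload).to_bytes(4, "big") != crc:
        raise IOError(f"checksum mismatch in {path}: possible corruption")
    return payload
```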

This ability to pinpoint the cause of failure is especially helpful when an OEM is:

    • Troubleshooting during the manufacturing and testing process to make sure that their devices do not fail for the given user scenarios.
    • Doing post-mortem analysis on parts returned from their customers, in order to understand the reasons for failures, and possible solutions.
    • Required to maintain a log of interactions between the various parts of the device, for future assistance with failure prevention or optimization.

Identifying the causes and costs of field failure is one thing, but what solutions can OEMs turn to in order to prevent these issues in the first place?

Fighting field failure with transactional file systems

Thankfully, various file system solutions exist for safeguarding critical data. FAT remains a simple and robust option with decent performance. Unfortunately, it isn’t able to provide the degree of data protection or performance that is sometimes needed. In safety-critical industries like automotive, aerospace, and industrial automation, basic file systems like FAT are often unable to meet the needed performance and reliability.

Transactional file systems like Tuxera’s own Reliance Edge offer a level of reliability, control, and performance for data that is simply too vital to be lost or corrupted. One of the key features of Reliance Edge is that it never overwrites live data, ensuring a backup version of that data remains safe and sound. This helps preserve user data in the event of power loss.
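The never-overwrite-live-data idea can be illustrated at the application level with a copy-on-write file update. This sketch is not how Reliance Edge is implemented – the file system applies the principle at the metadata and block level – but it shows why keeping the old copy intact until the new one is durable protects against power loss.

```python
import os

def atomic_update(path: str, new_data: bytes) -> None:
    """Update a file without ever overwriting the live copy in place.

    A power loss at any point leaves either the complete old file or the
    complete new file -- never a half-written mix of the two.
    """
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(new_data)
        f.flush()
        os.fsync(f.fileno())   # the new copy is durable before the swap
    os.replace(tmp, path)      # atomic rename on POSIX and Windows
```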

In the video below, I demonstrate the performance and data preservation differences between a FAT driver and Reliance Edge.


Final thoughts

Correctly finding and identifying the cause of field failures is the first step in tackling them. The next step is choosing the right solution – one that’s optimized to secure your critical data specifically in case of field failure and power loss.

Embedded device manufacturers – find out how Reliance Edge can help bulletproof your critical data.



Are existing SMB solutions scalable and can they cope with current demands?

Parents working from home, children learning online, and even recreation all rely on streaming media of some sort. For more secure work, VPN is often the norm. Can all aspects of the internet handle the traffic?

In the current circumstances of home-centric everything, internet service traffic has skyrocketed in both professional and recreational spheres. The streaming service Netflix, for example, saw an unprecedented gain of 15.8 million subscribers for the first quarter of 2020. In my own region of Seattle and Washington State, internet traffic is up considerably – 30% to 40% higher than in January of this year. In response, local service providers such as Comcast and T-Mobile have waived their bandwidth caps, at least in the short term. One of their concerns is this stress test of the "last-mile" services – the modems, routers and other components of home networks.

SMB protocol is more relevant than ever for shared content access

Besides the need for high throughput – or high transfer speeds – another concern is secure access to shared files, and this is where networking protocols come in. Home routers connect to the enterprise local area network (LAN), often through VPN. Many workers staying at home connect through individual paths to a few enterprise servers, and Server Message Block (SMB) is the protocol that allows the sharing of the common files they need to do their jobs.

SMB servers can be open-source solutions or proprietary implementations. The most commonly used open-source implementation is Samba. Tuxera maintains its own proprietary implementation – Fusion File Share by Tuxera – with commercial-grade SMB features and enhancements that can handle the stresses content providers and enterprises are facing during the COVID-19 pandemic – multiple users accessing the same content over the network.

Scalability is critical when countless organizations have switched to remote work

The key measurement for the current situation is scalability, because these network protocols need to provide files to more than just a few people – we’re talking 10s, 100s, even 1000s in the case of a large global enterprise such as a banking or medical institution. Companies are worried if their storage solutions can handle all the load of remote work. When an entire company hits the shared file at once, will all their requests get through without serious delay or even critical failures?

Increased loads have shown that Samba can easily max out CPU and memory usage at 100%. This illustrates the challenges facing SMB implementations in today’s crisis. While Samba can be tuned to handle speed issues, implementing proper security and scalability measures unfortunately demands more human and infrastructure resources, increasing costs.

Final thoughts

The increased networking demands we’ve discussed place significant stress on widely used SMB services, with results felt across multiple industries, from banks to medical institutions. These disruptions can put organizations that are integral to societal function at risk. What’s worse, these risks are exacerbated given the uncertain nature of the current pandemic. This wasn't the use case that most network solution providers envisioned, but this is where we are today. Networking protocols that are sluggish and unreliable are simply unacceptable in a world that requires rapid data access.

Thankfully, solutions do exist to help network providers tackle speed and scalability in SMB. Latency and client overload are challenges Tuxera has been testing against in SMB deployments for years, and we stand proudly behind our solution.

But regardless of the solution chosen, network service providers must evaluate how they can stay prepared for the scalability and security needs of the crisis today – as well as the needs of tomorrow.



Are you formatting your SD memory cards optimally?

We are excited to share with you an article from one of our valued partners, the SD Association. The following is a snippet from the original article. Be sure to read the full article here: The SD Memory Card Formatter – How this handy tool solves your memory card formatting needs.

For many of us, SD memory cards are an easy way to keep our important files and precious memories stored safely. But after long use, the files on an SD memory card may begin to fragment, which can degrade the card’s performance. That’s when we use simple reformatting methods to wipe cards clean in an effort to restore their reliability and performance. Proper formatting is therefore essential in keeping our critical document files and favorite photos or videos available for future viewing.

First-rate formatting with the SD Memory Card Formatter

When formatting an SD memory card, specific tools and methods are required in order to ensure an effective process with minimal data loss.

The SD Memory Card Formatter, developed by Tuxera, handles SD memory cards in accordance with standards defined by the SD Association. In fact, it’s the official tool for formatting any SD, SDHC, and SDXC memory cards, as recommended by the SD Association. By optimizing an SD memory card to SD Association standards, the SD Memory Card Formatter safely improves the card’s performance and lifetime. Operating system (OS) built-in formatters are rarely tested as rigorously and often don’t follow these standards as closely, resulting in a less reliable formatting process – and potentially earlier memory card failure.
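One concrete part of those standards is that card capacity determines the expected file system: standard-capacity SD cards use FAT12/FAT16, SDHC cards use FAT32, and SDXC cards use exFAT. A minimal shell sketch of that mapping (the helper function here is ours, purely illustrative – the official tool applies these rules, and more, automatically):

```shell
#!/bin/sh
# Map an SD card's capacity (in whole gigabytes) to the file system
# the SD standards define for that capacity class.
sd_standard_fs() {
    capacity_gb="$1"
    if [ "$capacity_gb" -le 2 ]; then
        echo "FAT12/FAT16 (SD)"       # standard capacity, up to 2 GB
    elif [ "$capacity_gb" -le 32 ]; then
        echo "FAT32 (SDHC)"           # high capacity, over 2 GB to 32 GB
    else
        echo "exFAT (SDXC)"           # extended capacity, over 32 GB
    fi
}

sd_standard_fs 16   # a typical SDHC-class card
sd_standard_fs 64   # a typical SDXC-class card
```

Formatting an SDXC card as FAT32 with a generic OS tool, for example, produces a card that technically works but no longer matches the class printed on its label – one reason a standards-aware formatter is worth using.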

The SD Memory Card Formatter is designed to be the best tool for the job, for virtually every type of user – offering you the highest level of reliability and data integrity for all of your formatting and reformatting needs.

Read the full article on the SD Association’s website for more information and technical details on how the SD Memory Card Formatter can help you.


Embedded World 2020 wrap-up: smaller players and display tech get the stage

Tuxera has attended Embedded World for many years now – it’s one of the premier events for embedded technology, not just in Europe but worldwide. It’s always an excellent opportunity for us to speak directly with our partners and other industry players. This year was no different, and though some impact from fears of illness was expected – after all, Mobile World Congress was cancelled – the event proved insightful and productive.

Breathing room for meaningful interactions – and fun

Attendance was roughly a third of the previous year’s, and many companies opted out of exhibiting – some at the last minute. The organizers, however, did a good job of covering for missing booths, moving exhibitors from hall 5 into hall 4 and setting up seating areas in many parts of hall 4A – including one with a beach theme! Nevertheless, it was a bit strange to walk by large, fully constructed booths with no people or equipment in them.

There were also fewer people whose sole job was to pass out flyers and invite you into their stand. For attendees, that meant more substantial conversations with more knowledgeable booth staff.

Key meetings and greetings

There also seemed to be a higher ratio of academics to working professionals. Unlike in prior years, when the bulk of students visited on Thursday, eager pupils wandered into our booth on all three days. I had the opportunity to demonstrate our GPL version of Reliance Edge and hear about some of the interesting projects they were working on. Perhaps the added data integrity of our file system will contribute to their success!

Another silver lining was the opportunity for exhibitors to spend time with other exhibitors – visiting booths, seeing the demonstrations, and comparing notes. We came away with a few more partnership opportunities than in previous years, when we were busy talking to designers and students. I am especially excited about opportunities with Toradex – who premiered a product to assist those migrating from Windows Embedded to Linux. We also had a chance to explore deepening our partnerships with Green Hills in automotive and Mentor Graphics in resource-constrained certified markets.

Industry trends on display

A big theme among exhibitors this year was graphical interfaces and display technologies. Many exhibitors were discussing graphical interfaces and ways to speed up debugging. There were also some truly impressive display technologies, including large transparent screens and flexible ones.

Without the large semiconductor companies at the event, smaller players had a chance to get their messages out. There were plenty of special-purpose chip vendors around, but far fewer bleeding-edge chips on show, given the big players’ absence. As a result, special-purpose chip and bespoke systems developers (as well as open-source consortia) reached audiences that in other years would probably have overlooked them while busy talking to heavyweights like STMicroelectronics and Wind River Systems. At least one customer we spoke with had decided not to attend precisely because of STMicroelectronics’ absence.

Looking onwards

The dates for next year have already been set, and it’s likely we will attend. It will be interesting to see how this year’s show shapes our plans for later this year – and, of course, for next year. We’re already getting excited for Electronica this November and the chance to meet more big players there. Will large-booth companies like STMicroelectronics be back in the same space? Can the organizers do anything to improve attendance? And what new trends will emerge in embedded designs over the next year? We’re always eager to find out!