Context Is Everything

Knowing what’s around a chip determines the success or failure of a design.


With consumer and industrial IoT applications proliferating, system context has become paramount for IC vendors. Gone are the days of developing a chip in isolation; close partnership with systems companies is de rigueur, because they provide the use-case data that is foundational to developing systems that work.

While this makes sense in a smartphone, it’s significantly harder to achieve in an IoT device for a couple of reasons. First, many of these devices must be low-cost, yet they are developed in lower volumes. Moreover, time to market is a competitive advantage, which greatly increases the pressure to churn out chips quickly while also understanding how those chips will behave in the context of not only one system, but also the other systems to which a device is connected.

The first step in all of this is defining what you mean by ‘system’ and ‘system-level design.’

“Your system is a component for the next system higher up,” said Frank Schirrmeister, senior group director for product management in Cadence’s System & Verification Group. “A lot of people, when talking about system-level design, are talking about the chip or the SoC. That’s certainly one level. But then you think about the SoC within its system context, such as thousands of those spawning within a wireless network environment, and you have very unique challenges there.”

When developing for the IoT, there are very specific challenges at the edge node level.

“In IoT, there’s actually a question of how much you are simulating the full system,” Schirrmeister said. “You have to be careful with those IoT chips because if you are just counting on making money from the silicon in the IoT edge node chip itself, then there may not be much revenue and margin there. The value is in the full system, so in your fitness tracker it’s in the combination of the tracker itself — which you actually want to keep as cheap as possible, and ideally give away — to the hub (smartphone, car, TV) to the network, and to the cloud where all of the data is accumulated. You need to look at it systemically to figure out where the value is.”

Consider a Beddit sleep tracker, for example. The cost for the heart rate tracker is straightforward enough. But the overall value may include the ability to monetize advertising if the wearer is snoring.

In this context, the whole system needs to be simulated to make sure it works well, which goes back to older system-level design-type tools such as the Ptolemy Project at UC Berkeley, or a tool called BONeS (Block-Oriented Network Simulator) from Cadence, and MathWorks, among others. Interestingly, these system configuration tools might become more important.

“This is now the system architecture of what happens to my bandwidth if suddenly a thousand users in parallel on (U.S. highway) 101 upload their data on how fast they are going, and what that means for the network,” Schirrmeister said. “That system-level simulation is typically done with tools like Ptolemy, Mathworks, but it’s actually becoming more and more almost like IT configuration. It’s the same type of challenge as an IT department configuring computers between different buildings. How much network load is there? It’s a system-level challenge.”
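Even before reaching for Ptolemy or MATLAB, that kind of capacity question can be framed with simple arithmetic. The sketch below is purely illustrative; the device count, payload size, reporting rate, and protocol overhead factor are assumptions, not figures from the article, and a real analysis would also model contention, retries, and latency.

```cpp
// Back-of-the-envelope sketch of the network-load question: if N vehicles on a
// stretch of highway all upload telemetry at once, what aggregate bandwidth
// does the network have to absorb? All constants are illustrative assumptions.
#include <cstdio>

int main() {
    const double vehicles          = 1000.0;  // assumed concurrent uploaders
    const double report_bytes      = 512.0;   // assumed payload per report
    const double reports_per_sec   = 2.0;     // assumed reporting rate
    const double protocol_overhead = 1.4;     // assumed TCP/TLS/header factor

    double per_device_bps = report_bytes * 8.0 * reports_per_sec * protocol_overhead;
    double aggregate_mbps = vehicles * per_device_bps / 1e6;

    std::printf("Per device: %.1f kbit/s\n", per_device_bps / 1e3);
    std::printf("Aggregate:  %.1f Mbit/s across %g devices\n", aggregate_mbps, vehicles);
    return 0;
}
```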

To be sure, model-based development tools are alive and well to assess how an overall system looks, and in the specific case of IoT, the challenge from a monetary perspective is that there are not that many users and licenses. It’s a classic architecture problem.

How semiconductor companies make money at this will come down to very clearly specifying the system. Optimizing usage of their chips in the system is key, along with their systems engineers making sure that the system works because end users are demanding it. If there is a situation in which there isn’t enough bandwidth or there is too much latency, end users will get upset with the service and stop using it, so it’s really an indirect monetization, Schirrmeister observed.

The modeling investment
When looking at the investment and the tasks to support system-level modeling, as well as the related virtual prototyping, one dimension that needs to be considered is the use case for the models.

“Depending on what you want to do with your virtual prototype, it places requirements on the individual models,” said Pat Sheridan, product marketing senior staff for virtual prototyping at Synopsys. “We know the use cases of virtual prototyping for software development. This is where the requirements are for loosely-timed TLM-2.0 models that can provide a register-accurate representation of the platform. They can run fast, they can run the actual binary that would go on the end product, so it’s a software development activity, and some early integration of hardware and software. That places certain requirements on the models, so this is an important part of the context.”

Architectures add their own demands on models, particularly for performance or power analysis. Here, the models need timing annotation so that, as the actual workload representing the application is run, the design team can get a sense of the performance and the power consumption. That carries its own set of requirements on the models. From a high level, this area probably gets the most discussion. Engineers understand there is a TLM standard, IEEE 1666 SystemC TLM-2.0, which supports both use cases, but you have to understand which one you are going for to gauge the investment, Sheridan pointed out.
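For reference, here is a minimal sketch of what such a loosely-timed TLM-2.0 model can look like. It is not a model from Synopsys or any vendor quoted here; the SensorRegs peripheral, its four-register map, and the 10 ns access time are illustrative assumptions. The point is that the target is register-accurate, so software sees real addresses and values, while timing is only annotated onto the transaction, which is what lets these platforms run the actual product binary at speed.

```cpp
// Minimal loosely-timed TLM-2.0 sketch (IEEE 1666 SystemC). Module names,
// register map, and timing are illustrative assumptions.
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Register-accurate target: a tiny peripheral with four 32-bit registers.
struct SensorRegs : sc_core::sc_module {
  tlm_utils::simple_target_socket<SensorRegs> socket;
  uint32_t regs[4] = {0, 0, 0, 0};

  SC_CTOR(SensorRegs) : socket("socket") {
    socket.register_b_transport(this, &SensorRegs::b_transport);
  }

  // Loosely-timed blocking transport: the whole access completes in one call,
  // with timing only annotated onto the 'delay' argument.
  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    uint32_t idx = static_cast<uint32_t>(trans.get_address()) >> 2;
    if (idx >= 4) {
      trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
      return;
    }
    if (trans.is_write())
      std::memcpy(&regs[idx], trans.get_data_ptr(), 4);
    else
      std::memcpy(trans.get_data_ptr(), &regs[idx], 4);
    delay += sc_core::sc_time(10, sc_core::SC_NS);  // assumed nominal access time
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// Initiator standing in for the embedded CPU that would run the product binary.
struct Cpu : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<Cpu> socket;

  SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

  void run() {
    uint32_t value = 42;
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_command(tlm::TLM_WRITE_COMMAND);
    trans.set_address(0x4);                    // register 1
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
    trans.set_data_length(4);
    trans.set_streaming_width(4);
    socket->b_transport(trans, delay);
    wait(delay);                               // sync local time with simulation
  }
};

int sc_main(int, char**) {
  Cpu cpu("cpu");
  SensorRegs dev("dev");
  cpu.socket.bind(dev.socket);
  sc_core::sc_start();
  return 0;
}
```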

For an IoT application, from an architecture point of view, this can get complicated. “One of the things that people might look at is what processing do they want to have happen in the device, and what do they want to offload to the cloud or whatever it is communicating with,” he said. “There’s this partitioning of the processing, and that will have an impact on the performance of the device. But it also impacts the power consumption and where the software development effort needs to focus. The software stack that’s local is going to be tuned to what is going to happen locally versus what’s going to be communicated with and happening in the network or the cloud. This can have a big impact on the definition of the product, and then the type of simulation you would want to do early to confirm that you’re getting enough throughput to be able to do that — and that you’re able to process the information from the sensors in the IoT device, and communicate that properly to the resources in the cloud.”
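A rough way to see why that partitioning decision matters is to compare the energy cost of shipping raw samples off the device against processing them locally and sending only a summary. The sketch below is a back-of-the-envelope estimate; the per-byte radio energy, per-operation MCU energy, and the two candidate partitions are assumptions chosen purely for illustration.

```cpp
// Illustrative partitioning trade-off: stream raw sensor data to the cloud, or
// process locally and transmit a summary. All constants are assumptions, not
// measurements from any real device.
#include <cstdio>

struct Partition {
    const char* name;
    double tx_bytes_per_sec;   // bytes radioed off the device each second
    double cpu_ops_per_sec;    // local processing load
};

int main() {
    const double nj_per_tx_byte = 200.0;  // assumed radio energy per byte (nJ)
    const double nj_per_op      = 0.5;    // assumed MCU energy per operation (nJ)

    Partition options[] = {
        {"raw-to-cloud",  4096.0, 1.0e3},  // ship raw samples, minimal local work
        {"local-feature",   64.0, 2.0e6},  // extract features locally, send summary
    };

    for (const Partition& p : options) {
        // nJ/s is nW; divide by 1e6 to express the budget in mW.
        double mw = (p.tx_bytes_per_sec * nj_per_tx_byte +
                     p.cpu_ops_per_sec * nj_per_op) * 1e-6;
        std::printf("%-14s tx=%6.0f B/s  cpu=%8.0f op/s  ~%.2f mW\n",
                    p.name, p.tx_bytes_per_sec, p.cpu_ops_per_sec, mw);
    }
    return 0;
}
```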

Another important dimension for ROI is keeping the project dimension in mind, he stressed. “Is it a production project or a research project? Are you trying to deploy these models to deliver software early in the context of a production project, for alignment with hardware schedules, or is it a research project? Are you just trying to understand what’s possible, doing a pilot where you’re looking at maybe a previous design and experimenting for the first time, or in a group? Or maybe somebody is bringing their personal background and skill to the new team and they want to be able to show how it works. That’s a big part of the context. What is the production aspect of it, and therefore who is involved in the activity and what are the milestones? What are the demands from a schedule point of view?”

One of the main differences between a production and a research project is that in a production design there will be existing investments in a model library, so there may be investment in creating some internal component TLM models as well as leveraging third-party options. Here, engineering teams must take stock of the available models and be very clear on what they must develop themselves. They also will need a modeling environment, which includes the tools given to users to enforce the modeling style in the most productive way possible. In a production project there is much less willingness to play around. Progress must be made, and users want tools that will help them do that. A pilot project, by contrast, is more educational in some ways, or about creating a proof point for the project that can then be shared.

Another consideration for context is the skill set of the person doing the modeling or using the models, said Sheridan. “What is their background? Do they have to learn modeling coming from a hardware design background? Or do they have a software design background and it’s easier for them to apply that to the development of a software-based transaction level model? This is an important aspect when you look at how you want to staff one of these projects, and where you start because there may be some activity related to education at that level that’s important.”

Making money in the IoT

Darrell Teegarden, project manager for mechatronics within the SLE systemvision.com team at Mentor Graphics, noted that the IoT splits into two major areas — IoT for consumer products and industrial IoT. The fact that they both have IoT in the name implies they are related, but they really are very different.

“The biggest short-term monetary opportunity is in industrial IoT,” Teegarden said. “A lot of people are trying to figure out how to do that. The big opportunity in industrial IoT is to reduce cost, and there is a lot of low-hanging fruit to go after the cost of an industrial situation. Big companies like GE — which is bigger than a lot of countries — are betting their whole future strategy on the use of Internet-connected devices as a way of managing cost of industrial applications. For consumer IoT apps, the money is going to be made with large volume, but there is so much competition — everybody has some kind of solution there. Differentiating for that is going to be tougher.”

To be sure, this puts pressure on the entire ecosystem to make sure everything works.

“Whenever you are trying to design and promote a new silicon device, to get the design win is the most important goal of the semiconductor supplier because the time-to-money is driven by the time to volume. Getting those design wins, getting them early, and getting them before the competitor does is the whole ball game, because whoever gets to volume fastest takes that profit and can move on to the next thing. Everybody else is fighting over scraps after a certain point. What will give some semi company an advantage over their competitor in getting that design win is being able to put that part in the context of the application, where the person who is making the design decision is going to see it,” he said.

One approach to this is offering reference design boards.

“Anytime a new part comes out there’s also a collection of reference designs that make the job of designing in that part as easy as possible, so somebody can order this board. They are buying a part but ordering a board, and the board has everything on it, and they go about figuring out if this will work. They can prototype it into their systems and see if it’s going to work, and then go back and design the form factor to fit whatever they need,” Teegarden said.

Similarly, from a virtual standpoint, Schirrmeister said engineering teams have started performing multi-chip emulation where multiple chips are emulated to see how they connect, and how the protocols all work out to determine that the chip works in the system environment.

At a foundational level, this all speaks to context. Mastering that will be the determining factor as to which companies win and lose in an increasingly connected world. In that world, context is everything.


