
Thermal Issues in Testing of Advanced Systems on Chip

Author : Nima Aghaee Ghaleshahi
Publisher : Linköping University Electronic Press
Page : 219 pages
File Size : 24,94 MB
Release : 2015-09-23
ISBN : 9176859495

Many cutting-edge computer and electronic products are powered by advanced Systems-on-Chip (SoC). Advanced SoCs combine high performance with a large number of functions, achieved by the efficient integration of a huge number of transistors. Such very-large-scale integration is enabled by a core-based design paradigm as well as deep-submicron and 3D-stacked-IC technologies. These technologies are susceptible to reliability and testing complications caused by thermal issues. Three crucial thermal issues, related to temperature variations, temperature gradients, and temperature cycling, are addressed in this thesis. Existing test scheduling techniques rely on temperature simulations to generate schedules that meet thermal constraints such as overheating prevention. The difference between the simulated temperatures and the actual temperatures is called the temperature error. For past technologies this error is negligible, but advanced SoCs experience large errors due to large process variations. Such large errors have costly consequences, such as overheating, and must be taken care of. This thesis presents an adaptive approach to generating test schedules that handles such temperature errors. Advanced SoCs manufactured as 3D stacked ICs experience large temperature gradients. Temperature gradients accelerate certain early-life defect mechanisms; these mechanisms can be artificially accelerated using gradient-based, burn-in-like operations so that the defects are detected before shipping. Moreover, temperature gradients exacerbate some delay-related defects, and in order to detect such defects, testing must be performed while appropriate temperature gradients are enforced. A schedule-based technique that enforces the temperature gradients for burn-in-like operations is proposed in this thesis, and it is further developed to support testing for delay-related defects while appropriate gradients are enforced. The last thermal issue addressed by this thesis is temperature cycling. Temperature-cycling test procedures are usually applied to safety-critical applications to detect cycling-related early-life failures. Such failures affect advanced SoCs, particularly the through-silicon-via structures in 3D stacked ICs. An efficient schedule-based cycling-test technique that combines cycling acceleration with testing is proposed in this thesis. The proposed technique fits into existing 3D testing procedures and does not require temperature chambers, so the overall cycling-acceleration and testing cost can be drastically reduced. All the proposed techniques have been implemented and evaluated in extensive experiments based on the ITC'02 benchmarks as well as a number of 3D stacked ICs. The experiments show that the proposed techniques work effectively and reduce costs, in particular the costs of addressing thermal issues and early-life failures. We have also developed a fast temperature simulation technique based on a closed-form solution of the temperature equations. Experiments demonstrate that the proposed simulation technique reduces the schedule generation time by more than half.
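The closed-form simulation idea can be illustrated with the standard lumped RC thermal model, in which a node dissipating power P relaxes exponentially toward a steady-state temperature instead of being integrated numerically. Below is a minimal single-node sketch with hypothetical parameter values; the thesis itself works with multi-node models of complete SoCs.

import math

def rc_temperature(t, t_init, power, r_th, c_th, t_amb=45.0):
    # Closed-form temperature of one lumped RC thermal node: the node
    # relaxes toward the steady state t_amb + power * r_th with time
    # constant tau = r_th * c_th, so no numerical integration is needed.
    # All parameter values used here are hypothetical.
    t_ss = t_amb + power * r_th        # steady-state temperature [C]
    tau = r_th * c_th                  # thermal time constant [s]
    return t_ss + (t_init - t_ss) * math.exp(-t / tau)

# A core at 60 C dissipating 2 W during a test, R = 10 K/W, C = 0.05 J/K:
for t in (0.0, 0.5, 2.0):
    print(t, rc_temperature(t, 60.0, 2.0, 10.0, 0.05))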

Studying Simulations with Distributed Cognition

Author : Jonas Rybing
Publisher : Linköping University Electronic Press
Page : 115 pages
File Size : 11,43 MB
Release : 2018-03-20
ISBN : 9176853489

Simulations are frequently used techniques for training, performance assessment, and prediction of future outcomes. In this thesis, the term “human-centered simulation” is used to refer to any simulation in which humans and human cognition are integral to the simulation’s function and purpose (e.g., simulation-based training). A general problem for human-centered simulations is to capture the cognitive processes and activities of the target situation (i.e., the real-world task) and recreate them accurately in the simulation. The prevalent view within the simulation research community is that cognition consists of internal, decontextualized computational processes of individuals. However, contemporary theories of cognition emphasize the importance of the external environment, the use of tools, and social and cultural factors in cognitive practice. Consequently, there is a need for research on how such contemporary perspectives can be used to describe human-centered simulations, re-interpret theoretical constructs of such simulations, and direct how simulations should be modeled, designed, and evaluated. This thesis adopts distributed cognition as a framework for studying human-centered simulations. Training and assessment of emergency medical management in a Swedish context using the Emergo Train System (ETS) simulator was adopted as a case study. ETS simulations were studied and analyzed using the distributed cognition for teamwork (DiCoT) methodology with the goal of understanding, evaluating, and testing the validity of the ETS simulator. Moreover, to explore distributed cognition as a basis for simulator design, a digital re-design of ETS (DIGEMERGO) was developed based on the DiCoT analysis. The aim of the DIGEMERGO system was to retain the core distributed cognitive features of ETS, to increase validity and outcome reliability, and to provide a digital platform for emergency medical studies. DIGEMERGO was evaluated in three separate studies: first, a usefulness, usability, and face-validation study that involved subject-matter experts; second, a comparative validation study using an expert-novice group comparison; and finally, a transfer-of-training study based on self-efficacy and management performance. Overall, the results showed that DIGEMERGO was perceived as a useful, immersive, and promising simulator (with mixed evidence for validity) that demonstrated increased general self-efficacy and management performance following simulation exercises. This thesis demonstrates that distributed cognition, using DiCoT, is a useful framework for understanding, designing, and evaluating simulated environments. In addition, the thesis conceptualizes and re-interprets central constructs of human-centered simulation in terms of distributed cognition. In doing so, it shows how distributed cognitive processes relate to the validity, fidelity, functionality, and usefulness of human-centered simulations. This thesis thus provides a new understanding of human-centered simulations that is grounded in distributed cognition theory.

Applications of Partial Polymorphisms in (Fine-Grained) Complexity of Constraint Satisfaction Problems

Author : Biman Roy
Publisher : Linköping University Electronic Press
Page : 57 pages
File Size : 27,15 MB
Release : 2020-03-23
ISBN : 9179298982

In this thesis we study the worst-case complexity of constraint satisfaction problems and some of their variants. We use methods from universal algebra: in particular, algebras of total functions and of partial functions, known respectively as clones and strong partial clones. The constraint satisfaction problem parameterized by a set of relations Γ (CSP(Γ)) is the following problem: given a set of variables restricted by a set of constraints based on the relations in Γ, is there an assignment to the variables that satisfies all constraints? We refer to the set Γ as a constraint language. The inverse CSP problem over Γ (Inv-CSP(Γ)) asks the opposite: given a relation R, does there exist a CSP(Γ) instance with R as its set of models? When Γ is a Boolean language, we use the term SAT(Γ) instead of CSP(Γ) and Inv-SAT(Γ) instead of Inv-CSP(Γ). Fine-grained complexity is an approach in which we zoom in on a complexity class and classify the problems in it based on their worst-case time complexities. We start by investigating the fine-grained complexity of NP-complete CSP(Γ) problems. An NP-complete CSP(Γ) problem is said to be easier than an NP-complete CSP(Δ) problem if the worst-case time complexity of CSP(Γ) is not higher than the worst-case time complexity of CSP(Δ). We first analyze the NP-complete SAT problems that are easier than monotone 1-in-3-SAT (which can be represented as SAT(R) for a certain relation R), and find that there exists a continuum of such problems. For this, we use the connection between constraint languages and strong partial clones and exploit the fact that CSP(Γ) is easier than CSP(Δ) when the strong partial clone corresponding to Γ contains the strong partial clone of Δ. An NP-complete CSP(Γ) problem is said to be the easiest with respect to a variable domain D if it is easier than any other NP-complete CSP(Δ) problem over that domain. We show that for every finite domain there exists an easiest NP-complete problem among the ultraconservative CSP(Γ) problems, where an ultraconservative CSP is one whose constraint language contains all unary relations. We additionally show that no NP-complete CSP(Γ) problem can be solved in sub-exponential time (i.e., in 2^o(n) time, where n is the number of variables) given that the exponential-time hypothesis is true. Moving to classical complexity, we show that for any Boolean constraint language Γ, Inv-SAT(Γ) is either in P or coNP-complete. This generalizes an earlier dichotomy result, which was only known to hold for ultraconservative constraint languages. We show that Inv-SAT(Γ) is coNP-complete if and only if the clone corresponding to Γ contains essentially unary functions only. For arbitrary finite domains our results are not conclusive, but we manage to prove that the inverse k-coloring problem is coNP-complete for each k > 2. We exploit weak bases to prove many of these results. A weak base of a clone C is a constraint language that corresponds to the largest strong partial clone that contains C. It is known that for many decision problems X(Γ) that are parameterized by a constraint language Γ (such as Inv-SAT), there are strong connections between the complexity of X(Γ) and weak bases. This fact can be exploited to achieve general complexity results. The Boolean domain is well suited for this approach since we have a fairly good understanding of Boolean weak bases. In the final result of this thesis, we investigate the relationships between the weak bases in the Boolean domain based on their strong partial clones and completely classify them according to set inclusion. To avoid a tedious case analysis, we introduce a technique that allows us to discard a large number of cases from further investigation.
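The algebraic machinery can be made concrete: a k-ary partial function f is a partial polymorphism of a relation R if, whenever f is applied coordinate-wise to k tuples of R and is defined on every coordinate, the resulting tuple is again in R; the strong partial clone of Γ consists of all partial polymorphisms shared by the relations in Γ. The brute-force check below is a small illustrative sketch of this definition only, not a method from the thesis.

from itertools import product

def is_partial_polymorphism(f, relation, arity):
    # f is a dict from `arity`-tuples of domain values to values;
    # inputs on which f is undefined are simply absent from the dict.
    for tuples in product(relation, repeat=arity):
        image = []
        for coords in zip(*tuples):      # apply f column by column
            if coords not in f:
                break                    # undefined somewhere: vacuously fine
            image.append(f[coords])
        else:
            if tuple(image) not in relation:
                return False
    return True

# R is the 1-in-3 relation behind monotone 1-in-3-SAT; f is a partial
# projection onto the first argument that is undefined on (1, 1).
R = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
f = {(0, 0): 0, (0, 1): 0, (1, 0): 1}
print(is_partial_polymorphism(f, R, 2))   # True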

Computational Complexity of some Optimization Problems in Planning

Author : Meysam Aghighi
Publisher : Linköping University Electronic Press
Page : 35 pages
File Size : 15,27 MB
Release : 2017-05-17
ISBN : 9176855198

Automated planning is known to be computationally hard in the general case: propositional planning is PSPACE-complete and first-order planning is undecidable. One method for analyzing the computational complexity of planning is to study restricted subsets of planning instances, with the aim of differentiating between instances of varying complexity; this is the methodology we use. Finding new tractable (i.e., polynomial-time solvable) problems has been a particularly important goal for researchers in the area, not only to differentiate between easy and hard planning instances, but also to use polynomial-time solvable instances to construct better heuristic functions and improve planners. We identify a new class of tractable cost-optimal planning instances by restricting the causal graph. We study the computational complexity of oversubscription planning (such as the net-benefit problem) under various restrictions and reveal strong connections with classical planning. Inspired by this, we present a method for compiling oversubscription planning problems into the ordinary plan existence problem. We further study the parameterized complexity of cost-optimal and net-benefit planning under the same restrictions and show that the choice of numeric domain for the action costs has a great impact on the parameterized complexity. We finally consider the parameterized complexity of certain problems related to partial-order planning. In some applications, plans that are less restricted than total-order plans are needed, and a partial-order plan is used instead. When dealing with partial-order plans, one important question is how to obtain optimal partial-order plans, i.e., plans with the highest degree of freedom according to some notion of flexibility. We study several optimization problems for partial-order plans, such as finding a minimum deordering or reordering, and finding the minimum parallel execution length.
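The causal-graph restrictions mentioned above are stated over a simple structure: the causal graph has one vertex per state variable and an arc from u to v whenever some action that changes v also mentions u. A toy construction under an assumed dictionary-based action encoding (not the thesis's formalism) might look as follows.

def causal_graph(actions):
    # Each action is a dict with 'pre' and 'eff' mapping variables to
    # values.  There is an arc u -> v whenever some action affects v
    # while mentioning u in its precondition or another effect.
    arcs = set()
    for a in actions:
        mentioned = set(a['pre']) | set(a['eff'])
        for v in a['eff']:
            arcs.update((u, v) for u in mentioned if u != v)
    return arcs

# Hypothetical two-action task: load a package into a truck, then drive.
actions = [
    {'pre': {'pkg': 'loc1', 'truck': 'loc1'}, 'eff': {'pkg': 'in_truck'}},
    {'pre': {'truck': 'loc1'}, 'eff': {'truck': 'loc2'}},
]
print(sorted(causal_graph(actions)))   # [('truck', 'pkg')]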

Analysis, Design, and Optimization of Embedded Control Systems

Author : Amir Aminifar
Publisher : Linköping University Electronic Press
Page : 155 pages
File Size : 16,69 MB
Release : 2016-02-18
Category : Control systems
ISBN : 917685826X

Today, many embedded or cyber-physical systems, e.g., in the automotive domain, comprise several control applications sharing the same platform. It is well known that such resource sharing leads to complex temporal behaviors that degrade the quality of control and, more importantly, may even jeopardize stability in the worst case, if not properly taken into account. In this thesis, we consider embedded control or cyber-physical systems where several control applications share the same processing unit. The focus is on the control-scheduling co-design problem, where the controller and scheduling parameters are jointly optimized. The fundamental difference between control applications and traditional embedded applications motivates the need for novel methodologies for the design and optimization of embedded control systems, and this thesis is one more step towards the correct design and optimization of such systems. Both offline and online methodologies for embedded control systems are covered. The importance of considering both the expected control performance and stability is discussed, and a control-scheduling co-design methodology is proposed that optimizes control performance while guaranteeing stability. Orthogonal to this, bandwidth-efficient stabilizing control servers are proposed, which support compositionality, isolation, and resource efficiency in design and co-design. Finally, we extend the scope of the proposed approach to non-periodic control schemes and address the challenges of sharing the platform with self-triggered controllers. In addition to the offline methodologies, a novel online scheduling policy to stabilize control applications is proposed.
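A basic ingredient of such co-design is checking, for a candidate sampling period assigned by the scheduler, whether the closed loop remains stable. The sketch below, assuming numpy and scipy are available, discretizes a plant under zero-order hold and tests Schur stability of the closed loop; it ignores the delay and jitter that the thesis also accounts for, and the plant and gain values are hypothetical.

import numpy as np
from scipy.linalg import expm

def is_stable_at_period(A, B, K, h):
    # Discretize dx/dt = Ax + Bu under zero-order hold at period h and
    # check that x+ = (Ad - Bd K) x is Schur stable (spectral radius < 1).
    n, m = A.shape[0], B.shape[1]
    # Van Loan trick: expm of the block matrix packs Ad and Bd together.
    M = expm(np.block([[A, B], [np.zeros((m, n + m))]]) * h)
    Ad, Bd = M[:n, :n], M[:n, n:]
    return np.max(np.abs(np.linalg.eigvals(Ad - Bd @ K))) < 1.0

# Hypothetical plant (double integrator) with a fixed state-feedback gain:
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.8]])
for h in (0.01, 0.5, 2.0):
    print(h, is_stable_at_period(A, B, K, h))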

Distributed Moving Base Driving Simulators

Author : Anders Andersson
Publisher : Linköping University Electronic Press
Page : 42 pages
File Size : 48,22 MB
Release : 2019-04-30
ISBN : 9176850900

Development of new functionality and smart systems for different types of vehicles is accelerating with the advent of new emerging technologies such as connected and autonomous vehicles. To ensure that these new systems and functions work as intended, flexible and credible evaluation tools are necessary. One example of this type of tool is a driving simulator, which can be used for testing new and existing vehicle concepts and driver support systems. When a driver in a driving simulator operates it in the same way as in actual traffic, a realistic evaluation of the object under study is obtained. Two advantages of a driving simulator are (1) that the same situation can be repeated several times over a short period of time, and (2) that driver reactions can be studied in dangerous situations that could result in serious injuries if they occurred in the real world. An important component of a driving simulator is the vehicle model, i.e., the model that describes how the vehicle reacts to its surroundings and to driver inputs. To increase the simulator realism or the computational performance, it is possible to divide the vehicle model into subsystems that run on different computers connected in a network. A subsystem can also be replaced with hardware using so-called hardware-in-the-loop simulation, and can then be connected to the rest of the vehicle model through a specified interface. The technique of dividing a model into smaller subsystems running on separate nodes that communicate through a network is called distributed simulation. This thesis investigates whether and how a distributed simulator design might facilitate the maintenance and new development required for a driving simulator to keep up with the increasing pace of vehicle development. For this purpose, three different distributed simulator solutions have been designed, built, and analyzed with the aim of constructing distributed simulators, including external hardware, in which the simulation achieves the same degree of realism as in a traditional driving simulator. One of these simulator solutions has been used to create a parameterized powertrain model that can be configured to represent any of a number of different vehicles; the driver's driving task is combined with the powertrain model to monitor deviations. After the powertrain model was created, subsystems from a simulator solution and the powertrain model were transferred to a Modelica environment. The goal is to create a framework for requirement testing that guarantees sufficient realism, also for a distributed driving simulation. The results show that the distributed simulators we have developed work well overall, with satisfactory performance. It is important to manage the vehicle model and how it is connected to a distributed system. In the distributed driveline simulator setup, the network delays were so small that they could be ignored, i.e., they did not affect the driving experience. However, if the delays are gradually increased, a driver in the distributed simulator will change their behavior. The impact of communication latency on a distributed simulator also depends on the simulator application: different usages of the simulator, i.e., different simulator studies, will place different demands on it. We believe that many simulator studies could be performed using a distributed setup. One issue is how modifications to the system affect the vehicle model and the desired behavior.
This leads to the need for a methodology for managing model requirements. In order to detect model deviations in the simulator environment, a monitoring aid has been implemented to notify test managers when a model behaves strangely or is driven outside its validated region. Since the availability of distributed laboratory equipment can be limited, the possibility of using Modelica (an equation-based, object-oriented programming language) for simulating subsystems is also examined. The Modelica implementation of the model has also been extended with requirements management, and a framework is proposed for automatically evaluating the model in a tool.
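The effect of communication latency in a distributed simulator can be illustrated with a toy co-simulation in which two subsystems exchange outputs through a network modelled as a FIFO queue of configurable depth. This is a minimal sketch with made-up first-order dynamics, not the simulator software from the thesis.

from collections import deque

def cosimulate(steps, delay, dt=0.01):
    # Two coupled first-order subsystems exchange outputs over 'network'
    # FIFOs that are `delay` steps deep, so each side integrates against
    # a stale value of the other.  Illustrates only how latency enters
    # the loop of a distributed simulation.
    x1, x2 = 1.0, 0.0
    net12 = deque([x1] * (delay + 1))   # messages in flight, node 1 -> 2
    net21 = deque([x2] * (delay + 1))   # messages in flight, node 2 -> 1
    for _ in range(steps):
        u1, u2 = net21.popleft(), net12.popleft()
        x1 += dt * (-2.0 * x1 + u1)     # subsystem 1 sees a delayed x2
        x2 += dt * (-2.0 * x2 + u2)     # subsystem 2 sees a delayed x1
        net12.append(x1)
        net21.append(x2)
    return x1, x2

for d in (0, 10, 100):   # extra latency of 0, 10, and 100 steps (0/0.1/1 s)
    print(d, cosimulate(1000, d))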

Scalable and Efficient Probabilistic Topic Model Inference for Textual Data

Author : Måns Magnusson
Publisher : Linköping University Electronic Press
Page : 75 pages
File Size : 13,23 MB
Release : 2018-04-27
ISBN : 9176852881

Probabilistic topic models have proven to be an extremely versatile class of mixed-membership models for discovering the thematic structure of text collections. There are many possible applications, covering a broad range of areas of study: technology, natural science, social science, and the humanities. In this thesis, a new efficient parallel Markov chain Monte Carlo inference algorithm is proposed for Bayesian inference in large topic models. The proposed methods scale well with the corpus size and can be used for other probabilistic topic models and other natural language processing applications. The proposed methods are fast, efficient, scalable, and converge to the true posterior distribution. In addition, a supervised topic model for high-dimensional text classification is proposed, with emphasis on interpretable document prediction using the horseshoe shrinkage prior in supervised topic models. Finally, we develop a model and inference algorithm that can model agenda and framing of political speeches over time with a priori defined topics. We apply the approach to analyze the evolution of immigration discourse in the Swedish parliament by combining theory from political science and communication science with a probabilistic topic model.
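The inference problem being parallelized can be made concrete with the standard collapsed Gibbs sampler for latent Dirichlet allocation, in which each token's topic indicator is resampled from a closed-form conditional built from count statistics. The serial textbook version below is only a reference point; the thesis's contribution is parallel samplers targeting the same posterior. The corpus and hyperparameter values are made up.

import random
from collections import defaultdict

def gibbs_lda(docs, K, V, iters=200, alpha=0.1, beta=0.01):
    # docs: list of documents, each a list of word ids in [0, V).
    z = [[random.randrange(K) for _ in doc] for doc in docs]
    ndk = defaultdict(int); nkw = defaultdict(int); nk = [0] * K
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the token, then resample its topic from the
                # collapsed conditional p(k) ~ (ndk+a)(nkw+b)/(nk+Vb).
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                weights = [(ndk[d, j] + alpha) * (nkw[j, w] + beta) /
                           (nk[j] + V * beta) for j in range(K)]
                k = random.choices(range(K), weights)[0]
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return z

docs = [[0, 1, 2, 1], [3, 4, 3, 4], [0, 1, 4, 3]]   # toy corpus, V = 5
print(gibbs_lda(docs, K=2, V=5))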

Parameterized Verification of Synchronized Concurrent Programs

Author : Zeinab Ganjei
Publisher : Linköping University Electronic Press
Page : 192 pages
File Size : 20,82 MB
Release : 2021-03-19
ISBN : 9179296971

There is currently an increasing demand for concurrent programs. Checking the correctness of concurrent programs is a complex task due to the interleavings of processes. Sometimes, violations of the correctness properties of such systems cause human or resource losses; therefore, it is crucial to check the correctness of such systems. Two main approaches to software analysis are testing and formal verification. Testing can help discover many bugs at a low cost; however, it cannot prove the correctness of a program. Formal verification, on the other hand, is the approach for proving program correctness. Model checking is a formal verification technique that is suitable for concurrent programs. It aims to automatically establish the correctness (expressed in terms of temporal properties) of a program through an exhaustive search of the behavior of the system. Model checking was initially introduced for the purpose of verifying finite-state concurrent programs, and extending it to infinite-state systems is an active research area. In this thesis, we focus on the formal verification of parameterized systems, that is, systems in which the number of executing processes is not bounded a priori. We provide fully automatic and parameterized model checking techniques for establishing the correctness of safety properties for certain classes of concurrent programs. We provide an open-source prototype for every technique and present our experimental results on several benchmarks. First, we address the problem of automatically checking safety properties for bounded as well as parameterized phaser programs. Phaser programs are concurrent programs that make use of the complex synchronization construct of Habanero Java phasers. For the bounded case, we establish the decidability of checking the violation of program assertions and the undecidability of checking deadlock-freedom. For the parameterized case, we study different formulations of the verification problem and propose an exact procedure that is guaranteed to terminate for some reachability problems even in the presence of unbounded phases and arbitrarily many spawned processes. Second, we propose an approach for the automatic verification of parameterized concurrent programs in which shared variables are manipulated by atomic transitions to count and synchronize the spawned processes. For this purpose, we introduce counting predicates, which relate counters that refer to the number of processes satisfying some given properties to the variables that are directly manipulated by the concurrent processes. We then combine existing work on counter, predicate, and constrained monotonic abstraction and build a nested counterexample-based refinement scheme to establish correctness. Third, we introduce Lazy Constrained Monotonic Abstraction for more efficient exploration of well-structured abstractions of infinite-state non-monotonic systems. We propose several heuristics and assess the efficiency of the proposed technique through extensive experiments using our open-source prototype. Lastly, we propose a sound but (in general) incomplete procedure for the automatic verification of safety properties for a class of fault-tolerant distributed protocols described in the Heard-Of (HO for short) model. The HO model is a popular model for describing distributed protocols. We propose a verification procedure that is guaranteed to terminate even for an unbounded number of processes executing the distributed protocol.
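The idea behind counter abstraction, one of the building blocks combined above, is to forget process identities and track only how many processes occupy each local state. A bounded toy version is sketched below for an assumed mutual-exclusion protocol; the thesis's techniques handle the unbounded, parameterized case via (constrained) monotonic abstraction.

from collections import deque

def violation_reachable(n_procs, n_states, rules, bad):
    # BFS over counter-abstracted configurations of n_procs identical
    # processes with local states 0..n_states-1.  A rule (src, guard, dst)
    # moves one process from src to dst when guard(cfg) holds on the
    # global counters.  Bounded toy search for illustration only.
    init = tuple([n_procs] + [0] * (n_states - 1))   # all start in state 0
    seen, frontier = {init}, deque([init])
    while frontier:
        cfg = frontier.popleft()
        if bad(cfg):
            return True
        for src, guard, dst in rules:
            if cfg[src] > 0 and guard(cfg):
                nxt = list(cfg); nxt[src] -= 1; nxt[dst] += 1
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt); frontier.append(nxt)
    return False

# Hypothetical mutex: states 0=idle, 1=trying, 2=critical.
rules = [
    (0, lambda c: True, 1),        # idle -> trying
    (1, lambda c: c[2] == 0, 2),   # trying -> critical, only if CS empty
    (2, lambda c: True, 0),        # critical -> idle
]
print(violation_reachable(5, 3, rules, bad=lambda c: c[2] >= 2))  # False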

Orchestrating a Resource-aware Edge

Author : Klervie Toczé
Publisher : Linköping University Electronic Press
Page : 122 pages
File Size : 50,20 MB
Release : 2024-09-02
ISBN : 9180757480

More and more services are moving to the cloud, attracted by the promise of unlimited resources that are accessible anytime and are managed by someone else. However, hosting every type of service in large cloud datacenters is not possible or suitable, as some emerging applications have stringent latency or privacy requirements while also handling huge amounts of data. Therefore, in recent years, a new paradigm has been proposed to address the needs of these applications: the edge computing paradigm. Resources provided at the edge (e.g., for computation and communication) are constrained, hence resource management is of crucial importance. The incoming load to the edge infrastructure varies both in time and space. Managing the edge infrastructure so that the appropriate resources are available at the required time and location is called orchestration. This is especially challenging in the case of sudden load spikes and when the impact of the orchestration itself has to be limited. This thesis enables edge computing orchestration with increased resource awareness by contributing methods, techniques, and concepts for edge resource management. First, it proposes methods to better understand the edge resource demand. Second, it provides solutions on the supply side for orchestrating edge resources with different characteristics in order to serve edge applications with satisfactory quality of service. Finally, the thesis includes a critical perspective on the paradigm, by considering sustainability challenges. To understand the demand patterns, the thesis presents a methodology for categorizing the large variety of use cases that are proposed in the literature as potential applications for edge computing. The thesis also proposes methods for characterizing and modeling applications, as well as for gathering traces from real applications and analyzing them. These different approaches are applied to a prototype from a typical edge application domain: Mixed Reality. The important insight here is that application descriptions or models that are not based on a real application may not give an accurate picture of the load, which can drive incorrect decisions about what should be done on the supply side and thus waste resources. Regarding resource supply, the thesis proposes two orchestration frameworks for managing edge resources and successfully dealing with load spikes while avoiding over-provisioning. The first utilizes mobile edge devices, while the second leverages the concept of spare devices. Then, focusing on the request-placement part of orchestration, the thesis formalizes it, for applications structured as chains of functions (so-called microservices), as an instance of the Traveling Purchaser Problem, and solves it using integer linear programming. Two different energy metrics influencing request-placement decisions are proposed and evaluated. Finally, the thesis explores further resource awareness, collecting sustainability challenges that deserve more attention within edge computing. Among those related to resource use, the strategy of sufficiency is promoted as a way forward: aiming to use only the resources that are needed (no more, no less), with the goal of reducing resource usage. Different tools for adopting it are proposed and their use is demonstrated through a case study.
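The request-placement step can be illustrated with a much simplified integer linear program: binary variables assign each function of a chain to a node, each function is placed exactly once, node capacities are respected, and an assumed per-placement energy cost is minimized. The sketch assumes the PuLP library and made-up data, and it omits the chain-traversal structure that makes the thesis's formulation a Traveling Purchaser Problem.

import pulp

# Hypothetical instance: a 3-function chain placed on 2 edge nodes.
funcs, nodes = ["f1", "f2", "f3"], ["n1", "n2"]
cpu_need = {"f1": 2, "f2": 3, "f3": 1}
cpu_cap = {"n1": 4, "n2": 4}
energy = {("f1", "n1"): 5, ("f1", "n2"): 3, ("f2", "n1"): 4,
          ("f2", "n2"): 6, ("f3", "n1"): 2, ("f3", "n2"): 2}

prob = pulp.LpProblem("chain_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (funcs, nodes), cat="Binary")

# Objective: total (hypothetical) energy cost of the placement.
prob += pulp.lpSum(energy[f, n] * x[f][n] for f in funcs for n in nodes)
# Each function of the chain is placed on exactly one node.
for f in funcs:
    prob += pulp.lpSum(x[f][n] for n in nodes) == 1
# Node CPU capacity is respected.
for n in nodes:
    prob += pulp.lpSum(cpu_need[f] * x[f][n] for f in funcs) <= cpu_cap[n]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({f: next(n for n in nodes if x[f][n].value() == 1) for f in funcs})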

Robust Stream Reasoning Under Uncertainty

Author : Daniel de Leng
Publisher : Linköping University Electronic Press
Page : 234 pages
File Size : 16,73 MB
Release : 2019-11-08
ISBN : 9176850137

Vast amounts of data are continually being generated by a wide variety of data producers. This data ranges from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data being so abundant, the ability to make sense of these streams of data through reasoning is of great importance. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in physical environments. They commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data, and of its refinement, an important problem. Many contemporary approaches to stream reasoning focus on querying data streams in order to generate higher-level information, relying on well-known database approaches. Other approaches apply logic-based reasoning techniques, which rarely consider the provenance of their symbolic interpretations. In this work, we integrate techniques for logic-based stream reasoning with the adaptive generation of the state streams over which the reasoning is performed. This combination deals both with the challenge of reasoning over uncertain streaming data and with the problem of robustly managing streaming data and its refinement. The main contributions of this work are (1) a logic-based temporal reasoning technique based on path checking under uncertainty that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining the data streams required to perform spatio-temporal stream reasoning; and (3) the integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique is able to reason with intertemporal spatial relations by leveraging landmarks. Adaptive state-stream generation allows the framework to adapt to situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in a case study on run-time adaptive reconfiguration. The results show that the proposed system, by combining reasoning over streams with reasoning about streams, can robustly perform stream reasoning even when the availability of streaming resources changes.
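A core mechanism in logic-based temporal reasoning over streams of this kind is formula progression: as each state arrives, the temporal formula is rewritten into a residual formula expressing what the rest of the stream must still satisfy. The sketch below handles a small LTL fragment over crisp states; it is a minimal illustration and leaves out the uncertainty handling and spatial relations that the thesis adds.

TRUE, FALSE = ("true",), ("false",)

def progress(phi, state):
    # Progress formula phi one step through `state` (a set of true atoms).
    # Supported forms: ('atom', p), ('not', f), ('and', f, g),
    # ('or', f, g), ('next', f), ('always', f), ('eventually', f).
    op = phi[0]
    if op in ("true", "false"):
        return phi
    if op == "atom":
        return TRUE if phi[1] in state else FALSE
    if op == "not":
        f = progress(phi[1], state)
        return FALSE if f == TRUE else (TRUE if f == FALSE else ("not", f))
    if op == "and":
        f, g = progress(phi[1], state), progress(phi[2], state)
        if FALSE in (f, g): return FALSE
        if f == TRUE: return g
        if g == TRUE: return f
        return ("and", f, g)
    if op == "or":
        f, g = progress(phi[1], state), progress(phi[2], state)
        if TRUE in (f, g): return TRUE
        if f == FALSE: return g
        if g == FALSE: return f
        return ("or", f, g)
    if op == "next":
        return phi[1]
    if op == "always":        # G f  ==  f and X(G f)
        return progress(("and", phi[1], ("next", phi)), state)
    if op == "eventually":    # F f  ==  f or X(F f)
        return progress(("or", phi[1], ("next", phi)), state)
    raise ValueError("unknown operator: %r" % (op,))

# Monitor "always not collision" over a toy stream of three states.
phi = ("always", ("not", ("atom", "collision")))
for state in (set(), {"moving"}, {"collision", "moving"}):
    phi = progress(phi, state)
    print(phi)
# The third state violates the property, so phi collapses to ("false",).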