956 results for embedded, system, entropy, pool, TRNG, random, ADC
Abstract:
Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system.

This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon-entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce.

The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance, we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs, and hence a bound on the maximum leakage. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns.
It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
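The counting step described above can be illustrated with a minimal sketch (an illustrative reconstruction, not the dissertation's implementation; the `DSU` and `max_leakage_bound` names are mine). Treat each discovered two-bit relationship as an edge joining two output bits: since an equality or inequality constraint leaves a pair of bits only two consistent joint assignments, each connected component of constrained bits contributes at most a factor of 2 to the number of possible outputs, so the component count upper-bounds the leakage in bits.

```python
class DSU:
    """Minimal union-find over output-bit indices."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def max_leakage_bound(n_bits, constraints):
    """Upper-bound the number of outputs of a deterministic program.

    constraints: pairs (i, j) of output-bit indices known to be related
    (must-equal or must-unequal); either kind ties the two bits together,
    so each connected component admits at most 2 joint assignments.
    Returns (output_count_bound, leakage_bound_in_bits).
    """
    dsu = DSU(n_bits)
    for i, j in constraints:
        dsu.union(i, j)
    components = len({dsu.find(b) for b in range(n_bits)})
    # #outputs <= 2^components, so max leakage <= components bits
    return 2 ** components, components

# 8 output bits; bits 0-3 pairwise tied, bits 4-5 tied, bits 6 and 7 free
bound, bits = max_leakage_bound(8, [(0, 1), (1, 2), (2, 3), (4, 5)])
```

For a deterministic program, the min-entropy capacity is exactly log2 of the number of feasible outputs, which is why an output-count bound translates directly into a leakage bound.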
Abstract:
A novel thermal-management technology for advanced ceramic microelectronic packages has been developed, incorporating miniature heat pipes embedded in the ceramic substrate. The heat pipes use an axially grooved wick structure and water as the working fluid. Prototype substrate/heat-pipe systems were fabricated using high-temperature co-fired ceramic (alumina). The heat pipes were nominally 81 mm in length, 10 mm in width, and 4 mm in height, and were charged with approximately 50-80 mL of water. Platinum thick-film heaters were fabricated on the surface of the substrate to simulate heat-dissipating electronic components. Several thermocouples were affixed to the substrate to monitor temperature. One end of the substrate was attached to a heat sink maintained at constant temperature. The prototypes were tested and shown to operate successfully and reliably with thermal loads over 20 W, with thermal input from single and multiple sources along the surface of the substrate. Temperature distributions are discussed for the various configurations, and the effective thermal resistance of the substrate/heat-pipe system is calculated. Finite element analysis was used to support the experimental findings and to better understand the sources of the system's thermal resistance.
Abstract:
Increased device density, higher switching speeds of integrated circuits, and decreasing package sizes are placing new demands on high-power thermal management. The conventional method of forced-air cooling with a passive heat sink can handle heat fluxes of up to 3-5 W/cm²; however, current microprocessors operate at levels of 100 W/cm², which demands novel thermal-management systems. In this work, water-cooling systems with an active heat sink are embedded in the substrate. The research involved fabricating LTCC substrates of three configurations: an open-duct substrate, a second with thermal vias, and a third with thermal vias plus free-standing metal columns and metal foil. Thermal testing was performed experimentally, and the results are compared with CFD results. The overall thermal resistance of the base substrate is demonstrated to be 3.4 °C·cm²/W. Adding thermal vias reduces the effective resistance of the system by a factor of 7, and further adding free-standing columns reduces it by a factor of 20.
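A back-of-the-envelope companion to the numbers in this abstract (the function and variable names are mine; the 7x and 20x reduction factors and the 3.4 °C·cm²/W base value are taken directly from the abstract): with an area-normalized thermal resistance, the temperature rise across the substrate is simply heat flux times resistance.

```python
def temp_rise(heat_flux_w_per_cm2, r_area):
    """Temperature rise (degC) across the substrate: heat flux (W/cm^2)
    times area-normalized thermal resistance (degC * cm^2 / W)."""
    return heat_flux_w_per_cm2 * r_area

r_base = 3.4              # base LTCC substrate (from the abstract)
r_vias = r_base / 7       # with thermal vias: ~7x reduction
r_columns = r_base / 20   # vias + free-standing columns: ~20x reduction

# at the microprocessor-level flux of 100 W/cm^2 cited in the abstract:
dt_base = temp_rise(100, r_base)        # ~340 degC: clearly unworkable
dt_columns = temp_rise(100, r_columns)  # ~17 degC
```

The comparison makes the abstract's point concrete: the bare substrate cannot sustain microprocessor-class fluxes, while the via-plus-column configuration keeps the rise within a usable range.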
Abstract:
Various physical systems have dynamics that can be modeled by percolation processes. Percolation is used to study issues ranging from fluid diffusion through disordered media to fragmentation of a computer network caused by hacker attacks. A common feature of all of these systems is the presence of two non-coexistent regimes associated with certain properties of the system; for example, a disordered medium may or may not allow fluid to flow, depending on its porosity. The change from one regime to another characterizes the percolation phase transition. The standard way of analyzing this transition uses the order parameter, a variable related to some characteristic of the system that is zero in one regime and nonzero in the other. The proposal introduced in this thesis is that this phase transition can be investigated without explicit use of the order parameter, through the Shannon entropy instead. This entropy is a measure of the degree of uncertainty in the information content of a probability distribution. The proposal is evaluated in the context of cluster formation in random graphs, and we apply the method to both classical (Erdős–Rényi) and explosive percolation. It is based on computing the entropy of the cluster-size probability distribution, and the results show that the critical point of the transition is related to the derivatives of the entropy. Furthermore, the difference between the smooth and abrupt character of the classical and explosive percolation transitions, respectively, is reinforced by the observation that the entropy reaches a maximum at the classical critical point, whereas no such correspondence occurs for explosive percolation.
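The entropy diagnostic described above can be sketched in a few lines (an illustrative toy, not the thesis code; all names are mine): grow an Erdős–Rényi graph one random edge at a time using union-find, maintain a histogram of cluster sizes, and track the Shannon entropy of the cluster-size distribution as edges accumulate.

```python
import math
import random

def shannon_entropy(hist):
    """H = -sum p ln p over a histogram {cluster_size: count}."""
    total = sum(hist.values())
    return -sum((c / total) * math.log(c / total) for c in hist.values())

def er_entropy_curve(n, steps, seed=0):
    """Classical (Erdos-Renyi) percolation: add random edges one at a
    time and record the cluster-size-distribution entropy after each."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    hist = {1: n}  # cluster-size histogram: size -> number of clusters
    curve = []
    for _ in range(steps):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:  # merging two clusters updates the histogram
            for s in (size[a], size[b]):
                hist[s] -= 1
                if hist[s] == 0:
                    del hist[s]
            parent[a] = b
            size[b] += size[a]
            hist[size[b]] = hist.get(size[b], 0) + 1
        curve.append(shannon_entropy(hist))
    return curve

curve = er_entropy_curve(200, 300)
```

Per the abstract, plotting such a curve against edge density should show a maximum near the classical critical point (mean degree 1), with the transition location recoverable from the entropy's derivatives.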
Abstract:
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after properly taming the large scaling corrections of the model by applying a combined approach of various techniques from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining the zero-temperature numerical scheme in detail and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and illustrate the infinite-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
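In its simplest form, the quotients method mentioned above compares an observable on lattices of sizes L and 2L at the crossing point of the correlation length: if O(L) ~ L^(x/ν) at criticality, the quotient O(2L)/O(L) equals 2^(x/ν), so the exponent ratio falls out of a single logarithm. A minimal sketch with hypothetical numbers (the function name and values are mine; the thesis's extension to the specific-heat exponent α via the bond energy is more involved):

```python
import math

def exponent_ratio(obs_L, obs_2L):
    """Quotients method: if an observable scales as O(L) ~ L^(x/nu)
    at the critical point, then O(2L)/O(L) = 2^(x/nu), so
    x/nu = log2 of the measured quotient."""
    return math.log2(obs_2L / obs_L)

# hypothetical measurements of a susceptibility-like observable taken
# at the crossing point where xi(2L)/xi(L) = 2:
x_over_nu = exponent_ratio(10.0, 40.0)  # quotient 4 -> x/nu = 2
```

In practice the quotient is evaluated at the finite-size crossing of ξ/L for the pair (L, 2L), and scaling corrections shift the estimate, which is why the abstract stresses the careful extrapolation to infinite size.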
Abstract:
Tissue engineering of biomimetic skeletal muscle may lead to development of new therapies for myogenic repair and generation of improved in vitro models for studies of muscle function, regeneration, and disease. For the optimal therapeutic and in vitro results, engineered muscle should recreate the force-generating and regenerative capacities of native muscle, enabled respectively by its two main cellular constituents, the mature myofibers and satellite cells (SCs). Still, after 20 years of research, engineered muscle tissues fall short of mimicking contractile function and self-repair capacity of native skeletal muscle. To overcome this limitation, we set the thesis goals to: 1) generate a highly functional, self-regenerative engineered skeletal muscle and 2) explore mechanisms governing its formation and regeneration in vitro and survival and vascularization in vivo.
By studying myogenic progenitors isolated from neonatal rats, we first discovered advantages of using an adherent cell fraction for engineering of skeletal muscles with robust structure and function and the formation of a SC pool. Specifically, when synergized with dynamic culture conditions, the use of adherent cells yielded muscle constructs capable of replicating the contractile output of native neonatal muscle, generating >40 mN/mm2 of specific force. Moreover, tissue structure and cellular heterogeneity of engineered muscle constructs closely resembled those of native muscle, consisting of aligned, striated myofibers embedded in a matrix of basal lamina proteins and SCs that resided in native-like niches. Importantly, we identified rapid formation of myofibers early during engineered muscle culture as a critical condition leading to SC homing and conversion to a quiescent, non-proliferative state. The SCs retained natural regenerative capacity and activated, proliferated, and differentiated to rebuild damaged myofibers and recover contractile function within 10 days after the muscle was injured by cardiotoxin (CTX). The resulting regenerative response was directly dependent on the abundance of SCs in the engineered muscle that we varied by expanding starting cell population under different levels of basic fibroblast growth factor (bFGF), an inhibitor of myogenic differentiation. Using a dorsal skinfold window chamber model in nude mice, we further demonstrated that within 2 weeks after implantation, initially avascular engineered muscle underwent robust vascularization and perfusion and exhibited improved structure and contractile function beyond what was achievable in vitro.
To enhance translational value of our approach, we transitioned to use of adult rat myogenic cells, but found that despite similar function to that of neonatal constructs, adult-derived muscle lacked regenerative capacity. Using a novel platform for live monitoring of calcium transients during construct culture, we rapidly screened for potential enhancers of regeneration to establish that many known pro-regenerative soluble factors were ineffective in stimulating in vitro engineered muscle recovery from CTX injury. This led us to introduce bone marrow-derived macrophages (BMDMs), an established non-myogenic contributor to muscle repair, to the adult-derived constructs and to demonstrate remarkable recovery of force generation (>80%) and muscle mass (>70%) following CTX injury. Mechanistically, while similar patterns of early SC activation and proliferation upon injury were observed in engineered muscles with and without BMDMs, a significant decrease in injury-induced apoptosis occurred only in the presence of BMDMs. The importance of preventing apoptosis was further demonstrated by showing that application of caspase inhibitor (Q-VD-OPh) yielded myofiber regrowth and functional recovery post-injury. Gene expression analysis suggested muscle-secreted tumor necrosis factor-α (TNFα) as a potential inducer of apoptosis as common for muscle degeneration in diseases and aging in vivo. Finally, we showed that BMDM incorporation in engineered muscle enhanced its growth, angiogenesis, and function following implantation in the dorsal window chambers in nude mice.
In summary, this thesis describes novel strategies to engineer highly contractile and regenerative skeletal muscle tissues starting from neonatal or adult rat myogenic cells. We find that age-dependent differences of myogenic cells distinctly affect the self-repair capacity but not the contractile function of engineered muscle. Adult, but not neonatal, myogenic progenitors appear to require co-culture with other cells, such as bone marrow-derived macrophages, to allow robust muscle regeneration in vitro and rapid vascularization in vivo. Given the established roles of immune system cells in the repair of various muscle and non-muscle tissues, we expect that our work will stimulate future applications of immune cells as pro-regenerative or anti-inflammatory constituents of engineered tissue grafts. Furthermore, we expect that the rodent studies in this thesis will inspire successful engineering of biomimetic human muscle tissues for use in regenerative therapy and drug discovery applications.
Abstract:
The observation chart is, for many health professionals (HPs), the primary source of objective information about a patient's health. Information Systems (IS) research has demonstrated the positive impact of good interface design on decision making, and it follows that good observation chart design can positively impact healthcare decision making. Despite this potential, there is a paucity of observation-chart design literature, with the primary sources leveraging Human Computer Interaction (HCI) literature to design better charts. While this approach has been successful, it introduces a gap between understanding of the tasks performed by HPs when using charts and the design features implemented in the chart. Good IS allow data to be collected and manipulated so that it can be presented in a timely manner that supports specific tasks. Good interface design should therefore consider the specific tasks being performed before the interface is designed. This research adopts a Design Science Research (DSR) approach to formalise a framework of design principles that incorporates knowledge of the tasks performed by HPs when using observation charts together with knowledge of visual representations of data and the semiology of graphics. The research is presented in three phases; the initial two phases seek to discover and formalise design knowledge embedded in two situated observation charts: the paper-based NEWS chart developed by the Health Service Executive in Ireland and the electronically generated eNEWS chart developed by the Health Information Systems Research Centre in University College Cork. A comparative evaluation of each chart is also presented in the respective phases.
Throughout each of these phases, tentative versions of a design framework for electronic vital sign observation charts are presented, with each subsequent iteration of the framework (versions Alpha, Beta, V0.1 and V1.0) representing a refinement of the design knowledge. The design framework is named the framework for the Retrospective Evaluation of Vital Sign Information from Early Warning Systems (REVIEWS). Phase 3 of the research presents the deductive process for designing and implementing V0.1 of the framework, with evaluation of the instantiation allowing for the final iteration, V1.0, of the framework. This study makes a number of contributions to academic research. First, the research demonstrates that the cognitive tasks performed by nurses during clinical reasoning can be supported through good observation chart design. Second, the research establishes the utility of electronic vital sign observation charts in supporting the cognitive tasks performed by nurses during clinical reasoning. Third, the REVIEWS framework represents a comprehensive set of design principles which, if applied to chart design, will improve the usefulness of the chart in supporting clinical reasoning. Fourth, the electronic observation chart that emerges from this research is demonstrated to be significantly more useful than previously designed charts and represents a significant contribution to practice. Finally, the research presents a research design that employs a combination of inductive and deductive design activities to iterate on the design of situated artefacts.
Abstract:
The work presented in this thesis examines the properties of BPEs of various configurations and under different operating conditions in a large planar LEC system. Detailed analysis of time-lapsed fluorescence images allows us to calculate the doping propagation speed from the BPEs. By introducing a linear array of BPEs or dispersed ITO particles, multiple light-emitting junctions or a bulk homojunction have been demonstrated. In conclusion, it was observed that both the applied bias voltage and the size of the BPEs affected electrochemical doping from the BPE. If the applied bias voltage was initially not sufficiently high, the appearance of doping from the BPE was delayed. Experiments with parallel BPEs of different sizes (large, medium, small) demonstrate that the potential difference across the BPEs plays a vital role in doping initiation. Also, the p-doping propagation distance from the medium-sized BPE displayed exponential growth over a time span of 70 seconds. Experiments with a linear array of same-sized BPEs demonstrate that the doping propagation speed of each floating BPE was the same regardless of its position between the driving electrodes. Probing experiments under high driving voltages further demonstrated the potential for much more efficient light emission from an LEC with multiple BPEs.
Abstract:
The Lagrangian progression of a biological community was followed in a filament of the Mauritanian upwelling system, north-west Africa, during offshore advection. The inert dual tracers sulfur hexafluoride and helium-3 labelled a freshly upwelled patch of water that was mapped for 8 days. Changes in biological, physical, and chemical characteristics were measured, including phytoplankton productivity, nitrogen assimilation, and regeneration. Freshly upwelled water contained high nutrient concentrations but was depleted in N compared to Redfield stoichiometry. The highest rate of primary productivity was measured on the continental shelf, associated with high rates of nitrogen assimilation and a phytoplankton community dominated by diatoms and flagellates. Indicators of phytoplankton abundance and activity decreased as the labelled water mass transited the continental shelf slope into deeper water, possibly linked to the mixed layer depth exceeding the light penetration depth. By the end of the study, the primary productivity rate decreased and was associated with lower rates of nitrogen assimilation and lower nutrient concentrations. Nitrogen regeneration and assimilation took place simultaneously. Results highlighted the importance of regenerated NH4+ in sustaining phytoplankton productivity and indicate that the upwelled NO3 pool contained an increasing fraction of regenerated NO3 as it advected offshore. By calculating this fraction and incorporating it into an f-ratio formulation, we estimated that of the 12.38 Tg C of annual regional production, 4.73 Tg C was exportable.
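The final export estimate follows from simple f-ratio bookkeeping, sketched below with the abstract's own numbers (function names are mine, and this toy omits the study's correction for regenerated NO3 within the upwelled pool):

```python
def f_ratio(new_n_uptake, regenerated_n_uptake):
    """f-ratio: new production / (new + regenerated production),
    in the Eppley-and-Peterson sense."""
    return new_n_uptake / (new_n_uptake + regenerated_n_uptake)

def exportable(total_production_tg_c, f):
    """Exportable (new) production = f * total annual production."""
    return total_production_tg_c * f

# numbers reported in the abstract: 12.38 Tg C total annual regional
# production, of which 4.73 Tg C is exportable, implying f ~ 0.38
f_implied = 4.73 / 12.38
export_tg_c = exportable(12.38, f_implied)
```

The abstract's key refinement is that some of the measured NO3 uptake is itself regenerated rather than newly upwelled, which lowers the effective f-ratio and hence the export estimate relative to a naive calculation.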
Abstract:
The power-law size distributions obtained experimentally for neuronal avalanches are important evidence of criticality in the brain. This evidence is supported by the fact that a critical branching process exhibits the same exponent, τ ≈ 3/2. Models at criticality have been employed to mimic avalanche propagation and to explain the statistics observed experimentally. However, a crucial aspect of neuronal recordings has been almost completely neglected in the models: undersampling. While a typical multielectrode array records hundreds of neurons, the same area of neuronal tissue can contain tens of thousands. Here we investigate the consequences of undersampling in models with three different topologies (two-dimensional, small-world, and random network) and three different dynamical regimes (subcritical, critical, and supercritical). We found that undersampling modifies avalanche size distributions, extinguishing the power laws observed in critical systems. Distributions from subcritical systems are also modified, but the shape of the undersampled distributions is more similar to that of the fully sampled system. Undersampled supercritical systems can recover the general characteristics of the fully sampled version, provided that enough neurons are measured. Undersampling in two-dimensional and small-world networks leads to similar effects, while the random network is insensitive to sampling density due to the lack of a well-defined neighborhood. We conjecture that neuronal avalanches recorded from local field potentials avoid undersampling effects due to the nature of this signal, but the same does not hold for spike avalanches. We conclude that undersampled branching-process-like models in these topologies fail to reproduce the statistics of spike avalanches.
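A minimal sketch of the undersampling effect (an illustrative toy, not the paper's models: it only thins the recorded spikes and ignores the splitting and concatenation of avalanches that real spatial subsampling also causes; all names are mine): simulate avalanche sizes in a simple branching process and record each spike independently with probability p, mimicking an electrode array that sees a fraction of the tissue.

```python
import random

def avalanche(branching_sigma, rng, max_size=10_000):
    """Size of one avalanche in a simple branching process where each
    active unit spawns Binomial(2, sigma/2) offspring, so the mean
    branching ratio is sigma (sigma = 1 is critical)."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        offspring = sum(1 for _ in range(active * 2)
                        if rng.random() < branching_sigma / 2)
        active = offspring
    return size

def undersample(size, p, rng):
    """Each spike of an avalanche is recorded with probability p."""
    return sum(1 for _ in range(size) if rng.random() < p)

rng = random.Random(42)
full = [avalanche(1.0, rng) for _ in range(1000)]        # critical regime
recorded = [undersample(s, 0.1, rng) for s in full]      # 10% sampling
```

Comparing histograms of `full` and `recorded` illustrates the paper's point qualitatively: thinning distorts the size distribution, so a power law in the fully sampled system need not survive in the recorded one.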
Abstract:
Effective natural resource policy depends on knowing what is needed to sustain a resource and building the capacity to identify, develop, and implement flexible policies. This retrospective case study applies resilience concepts to a 16-year citizen science program and vernal pool regulatory development process in Maine, USA. We describe how citizen science improved adaptive capacities for innovative and effective policies to regulate vernal pools. We identified two core program elements that allowed people to act within narrow windows of opportunity for policy transformation, including (1) the simultaneous generation of useful, credible scientific knowledge and construction of networks among diverse institutions, and (2) the formation of diverse leadership that promoted individual and collective abilities to identify problems and propose policy solutions. If citizen science program leaders want to promote social-ecological systems resilience and natural resource policies as outcomes, we recommend they create a system for internal project evaluation, publish scientific studies using citizen science data, pursue resources for program sustainability, and plan for leadership diversity and informal networks to foster adaptive governance.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Current Ambient Intelligence and Intelligent Environment research focuses on interpreting a subject's behaviour at the activity level by logging Activities of Daily Living (ADLs) such as eating and cooking. In general, the sensors employed (e.g. PIR sensors, contact sensors) provide low-resolution information. Meanwhile, the expansion of ubiquitous computing allows researchers to gather additional information from different types of sensor, which can improve activity analysis. Building on previous research on sitting-posture detection, this research attempts to analyse human sitting activity further. The aim is to use a non-intrusive, low-cost chair system with embedded pressure sensors to recognize a subject's activity from their detected postures. The research has three steps: the first is to find a hardware solution for low-cost sitting-posture detection; the second is to find a suitable strategy for sitting-posture detection; and the last is to correlate time-ordered sitting-posture sequences with sitting activity. The author developed a prototype sensing system called IntelliChair for sitting-posture detection. Two experiments were conducted to determine the hardware architecture of the IntelliChair system. The prototype work examines sensor selection and the integration of various sensors, and indicates the best choice for a low-cost, non-intrusive system. Subsequently, this research applies signal-processing theory to explore the frequency characteristics of sitting posture, in order to determine a suitable sampling rate for the IntelliChair system. For the second and third steps, ten subjects were recruited for sitting-posture and sitting-activity data collection. The former dataset was collected by asking subjects to perform certain pre-defined sitting postures on IntelliChair and is used for the posture recognition experiment.
The latter dataset was collected by asking the subjects to perform their normal sitting-activity routine on IntelliChair for four hours, and is used for the activity modelling and recognition experiment. For the posture recognition experiment, two Support Vector Machine (SVM) based classifiers were trained (one for spine postures and the other for leg postures) and their performance evaluated. A Hidden Markov Model is used for sitting-activity modelling and recognition, in order to infer the selected sitting activities from sitting-posture sequences. After experimenting with possible sensors, the Force Sensing Resistor (FSR) was selected as the pressure-sensing unit for IntelliChair. Eight FSRs are mounted on the seat and back of a chair to gather haptic (i.e., touch-based) posture information. Furthermore, the research explores the possibility of using an alternative non-intrusive sensing technology (the vision-based Kinect sensor from Microsoft) and finds that the Kinect sensor is not reliable for sitting-posture detection due to joint drift. A suitable sampling rate for IntelliChair, determined from the experimental results, is 6 Hz. The posture classification results show that the SVM-based classifier is robust to "familiar" subject data (accuracy is 99.8% for spine postures and 99.9% for leg postures). When dealing with "unfamiliar" subject data, the accuracy is 80.7% for spine-posture classification and 42.3% for leg-posture classification. Activity recognition achieves 41.27% accuracy among four selected activities (relax, play game, working with PC, and watching video). The results of this thesis show that individual body characteristics and sitting habits influence both sitting-posture and sitting-activity recognition. This suggests that IntelliChair is suitable for individual usage, but a training stage is required.
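To make the classification pipeline concrete, here is a self-contained toy classifier over 8-sensor FSR feature vectors. The thesis uses SVM classifiers; this sketch substitutes a nearest-centroid rule purely so the example needs no external libraries, and all sensor readings and posture labels below are hypothetical.

```python
import math

def centroid(rows):
    """Mean 8-sensor FSR vector for one posture class."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(sample, centroids):
    """Assign the posture whose training centroid is nearest in
    Euclidean distance (a simple stand-in for the thesis's SVMs)."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# hypothetical normalized readings from the 8 FSRs (4 on seat, 4 on back)
train = {
    "upright":  [[0.9, 0.9, 0.8, 0.8, 0.5, 0.5, 0.5, 0.5]],
    "lean_fwd": [[0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1]],
}
cents = {label: centroid(rows) for label, rows in train.items()}
posture = classify([0.9, 0.9, 0.9, 0.8, 0.1, 0.2, 0.1, 0.1], cents)
```

In the thesis's full pipeline, a time-ordered sequence of such per-sample posture labels (at the chosen 6 Hz sampling rate) would then be fed to a Hidden Markov Model to infer the higher-level sitting activity.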