Abstract:
Modern manufacturing systems should satisfy emerging needs related to sustainable development. The design of sustainable manufacturing systems can be valuably supported by simulation, traditionally employed mainly for time and cost reduction. In this paper, a multi-purpose digital simulation approach is proposed to deal with sustainable manufacturing systems design through Discrete Event Simulation (DES) and 3D digital human modelling. DES models integrated with data on power consumption of the manufacturing equipment are utilized to simulate different scenarios with the aim of improving productivity as well as energy efficiency, avoiding resource and energy waste. 3D simulation based on digital human modelling is employed to assess human factors issues related to ergonomics and safety of manufacturing systems. The approach is implemented for the sustainability enhancement of a real manufacturing cell in the aerospace industry, automated through robotic deburring. Alternative scenarios are proposed and simulated, obtaining a significant improvement in energy efficiency (−87%) for the new deburring cell and a reduction in energy consumption of around 69% for the coordinate measuring machine, with high potential annual energy cost savings. Moreover, the simulation-based ergonomic assessment of human operator postures allows a 25% improvement in the workcell ergonomic index.
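As a rough illustration of how a DES model can be coupled with equipment power data, the sketch below steps a single deburring workstation through a list of part arrivals and accumulates energy use in its busy and idle states. The power ratings, cycle time and arrival times are assumed placeholder values, not figures from the study.

```python
POWER_KW = {"busy": 12.0, "idle": 3.5}   # assumed machine power draw in each state
CYCLE_MIN = 8.0                          # assumed deburring cycle time (minutes)
ARRIVALS_MIN = [0, 5, 20, 28, 45, 60]    # assumed part arrival times (minutes)

def simulate(arrivals, cycle, horizon=120.0):
    """Single-machine FIFO cell: returns parts produced and energy used (kWh)."""
    busy_time = 0.0
    free_at = 0.0
    for t in sorted(arrivals):
        start = max(t, free_at)          # part waits if the machine is still busy
        free_at = start + cycle
        busy_time += cycle
    idle_time = horizon - busy_time
    energy_kwh = (POWER_KW["busy"] * busy_time + POWER_KW["idle"] * idle_time) / 60.0
    return len(arrivals), energy_kwh

parts, kwh = simulate(ARRIVALS_MIN, CYCLE_MIN)
print(f"{parts} parts, {kwh:.2f} kWh, {kwh / parts:.2f} kWh/part")
```

Comparing kWh per part across alternative cycle times or arrival schedules is the kind of scenario analysis the abstract describes, here reduced to its simplest form.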
Abstract:
This keynote presentation will report some of our research work and experience on the development and applications of relevant methods, models, systems and simulation techniques in support of different types and various levels of decision making for business, management and engineering. In particular, the following topics will be covered:
• Modelling, multi-agent-based simulation and analysis of the allocation management of carbon dioxide emission permits in China (Nanfeng Liu & Shuliang Li)
• Agent-based simulation of the dynamic evolution of enterprise carbon assets (Yin Zeng & Shuliang Li)
• A framework & system for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps: a big data perspective (Jin Xu, Zheng Li, Shuliang Li & Yanyan Zhang)
• Open innovation: intelligent model, social media & complex adaptive system simulation (Shuliang Li & Jim Zheng Li)
• A framework, model and software prototype for modelling and simulation of deshopping behaviour and how companies respond (Shawkat Rahman & Shuliang Li)
• Integrating multiple agents, simulation, knowledge bases and fuzzy logic for international marketing decision making (Shuliang Li & Jim Zheng Li)
• A Web-based hybrid intelligent system for combined conventional, digital, mobile, social media and mobile marketing strategy formulation (Shuliang Li & Jim Zheng Li)
• A hybrid intelligent model for Web & social media dynamics, and evolutionary and adaptive branding (Shuliang Li)
• A hybrid paradigm for modelling, simulation and analysis of brand virality in social media (Shuliang Li & Jim Zheng Li)
• Network configuration management: attack paradigms and architectures for computer network survivability (Tero Karvinen & Shuliang Li)
Abstract:
The continuous advancement in computing, together with the decline in its cost, has resulted in technology becoming ubiquitous (Arbaugh, 2008; Gros, 2007). Technology is growing and is part of our lives in almost every respect, including the way we learn. Technology helps to collapse time and space in learning. For example, technology allows learners to engage with their instructors synchronously, in real time, and also asynchronously, by enabling sessions to be recorded. Space and distance are no longer an issue provided there is adequate bandwidth, which determines the most appropriate format, such as text, audio or video. Technology has revolutionised the way learners learn, courses are designed and 'lessons' are delivered, and continues to do so. The learning process can be made vastly more efficient as learners have knowledge at their fingertips, and unfamiliar concepts can be easily searched and an explanation found in seconds. Technology has also enabled learning to be more flexible, as learners can learn anywhere, at any time, and using different formats, e.g. text or audio. From the perspective of instructors and L&D providers, technology offers these same advantages, plus easy scalability. Administratively, preparatory work can be undertaken more quickly even whilst student numbers grow. Learners from far and new locations can be easily accommodated. In addition, many technologies can be easily scaled to accommodate new functionality and/or other new technologies. 'Designing and Developing Digital and Blended Learning Solutions' (5DBS) has been developed to recognise the growing importance of technology in L&D. This unit contains four learning outcomes, each with two assessment criteria (the same as all other units), except Learning Outcome 3, which has three assessment criteria. The four learning outcomes in this unit are:
• Learning Outcome 1: Understand current digital technologies and their contribution to learning and development solutions;
• Learning Outcome 2: Be able to design blended learning solutions that make appropriate use of new technologies alongside more traditional approaches;
• Learning Outcome 3: Know about the processes involved in designing and developing digital learning content efficiently and what makes for engaging and effective digital learning content;
• Learning Outcome 4: Understand the issues involved in the successful implementation of digital and blended learning solutions.
Each learning outcome has its own chapter, and each assessment criterion is allocated its own sections within the respective chapters. This first chapter addresses the first learning outcome, which has two assessment criteria: summarise the range of currently available learning technologies; and critically assess a learning requirement to determine the contribution that could be made through the use of learning technologies. The introduction to chapter one is in Section 1.0. Chapter 2 discusses the design of blended learning solutions, considering how digital learning technologies may support face-to-face and online delivery. Three sets of learning theory (behaviourism, cognitivism and constructivism) are introduced, and the implications of each for instructional design in blended learning are discussed. Chapter 3 centres on how relevant digital learning content may be created, and includes a review of the key roles, tools and processes involved in developing digital learning content.
Finally, Chapter 4 concerns the delivery and implementation of digital and blended learning solutions. This chapter surveys the key formats and models used to inform the configuration of virtual learning environment (VLE) software platforms. In addition, various software technologies that may be important in creating a VLE ecosystem that helps to enhance the learning experience are outlined. We introduce the notion of the personal learning environment (PLE), which has emerged from the democratisation of learning. We also review the roles, tools, standards and processes that L&D practitioners need to consider in the delivery and implementation of a digital and blended learning solution.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-07
Abstract:
In Fall 2015, the Engineering and Physical Science Library (EPSL) began lending anatomical models as part of its course reserves program. EPSL received a partial skeleton and two muscle model figures from instructors of BSCI105. These models circulate for 4 hours at a time and are generally used by small, collaborative groups of students in the library. This poster will look at the challenges and rewards of adding these items to EPSL's course reserves.
Abstract:
There are hundreds of millions of songs available to the public, necessitating the use of music recommendation systems to discover new music. Currently, such systems account only for the quantitative musical elements of songs, failing to consider human perception of music and leaving the listener's individual preferences out of the recommendations. Our research investigated the relationships between perceptual elements of music, represented by the MUSIC model, and computational musical features generated through The Echo Nest, to determine how a psychological representation of music preference can be incorporated into recommendation systems to reflect an individual's music preferences. Our resultant model facilitates computation of MUSIC factors from The Echo Nest features and can potentially be integrated into recommendation systems for improved performance.
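To illustrate the kind of mapping the abstract describes, the sketch below fits a multi-output linear regression from audio features to the five MUSIC factors (Mellow, Unpretentious, Sophisticated, Intense, Contemporary). The feature names and the randomly generated data are placeholders, not The Echo Nest's actual output or the study's dataset.

```python
import numpy as np

FEATURES = ["tempo", "energy", "acousticness", "valence", "loudness"]   # assumed names
MUSIC_FACTORS = ["Mellow", "Unpretentious", "Sophisticated", "Intense", "Contemporary"]

rng = np.random.default_rng(0)
X = rng.random((200, len(FEATURES)))         # placeholder audio features per song
Y = rng.random((200, len(MUSIC_FACTORS)))    # placeholder listener-rated MUSIC scores

# Fit W so that [X, 1] @ W approximates Y (ordinary least squares with intercept).
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)

def music_profile(features):
    """Predict the five MUSIC factor scores for one song's feature vector."""
    return np.append(features, 1.0) @ W

print(dict(zip(MUSIC_FACTORS, music_profile(X[0]).round(2).tolist())))
```

A recommender could then match a song's predicted MUSIC profile against a listener's preference profile rather than raw audio features alone.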
Abstract:
Duchenne muscular dystrophy (DMD) is a neuromuscular disease caused by mutations in the dystrophin gene. DMD is clinically characterized by severe, progressive and irreversible loss of muscle function, in which most patients lose the ability to walk by their early teens and die in their early 20s. Impaired intracellular calcium (Ca2+) regulation and activation of cell degradation pathways have been proposed as key contributors to DMD disease progression. This dissertation research consists of three studies investigating the role of intracellular Ca2+ in skeletal muscle dysfunction in different mouse models of DMD. Study one evaluated the role of Ca2+-activated enzymes (proteases) that activate protein degradation in excitation-contraction (E-C) coupling failure following repeated contractions in mdx and dystrophin-utrophin null (mdx/utr-/-) mice. Single muscle fibers from mdx/utr-/- mice had greater E-C coupling failure following repeated contractions compared to fibers from mdx mice. Moreover, protease inhibition during these contractions was sufficient to attenuate E-C coupling failure in muscle fibers from both mdx and mdx/utr-/- mice. Study two evaluated the effects of overexpressing the Ca2+ buffering protein sarcoplasmic/endoplasmic reticulum Ca2+-ATPase 1 (SERCA1) in skeletal muscles from mdx and mdx/utr-/- mice. Overall, SERCA1 overexpression decreased muscle damage and protected the muscle from contraction-induced injury in mdx and mdx/utr-/- mice. In Study three, the cellular mechanisms underlying the beneficial effects of SERCA1 overexpression in mdx and mdx/utr-/- mice were investigated. SERCA1 overexpression attenuated calpain activation in mdx muscle only, while partially attenuating the degradation of the calpain target desmin in mdx/utr-/- mice. Additionally, SERCA1 overexpression decreased the SERCA-inhibitory protein sarcolipin in mdx muscle but did not alter levels of Ca2+ regulatory proteins (parvalbumin and calsequestrin) in either dystrophic model. Lastly, SERCA1 overexpression blunted the increase in the endoplasmic reticulum stress marker Grp78/BiP in mdx mice and C/EBP homologous protein (CHOP) in mdx and mdx/utr-/- mice. Overall, findings from the studies presented in this dissertation provide new insight into the role of Ca2+ in muscle dysfunction and damage in different dystrophic mouse models. Further, these findings support the overall strategy of improving intracellular Ca2+ control for the development of novel therapies for DMD.
Abstract:
Maps depicting spatial pattern in the stability of summer greenness could advance understanding of how forest ecosystems will respond to global changes such as a longer growing season. Declining summer greenness, or "greendown", is spectrally related to declining near-infrared (NIR) reflectance and is observed in most remote sensing time series to begin shortly after peak greenness at the end of spring and extend until the beginning of leaf coloration in autumn. Understanding spatial patterns in the strength of greendown has recently become possible with the advancement of Landsat phenology products, which show that greendown patterns vary at scales appropriate for linking these patterns to proposed environmental forcing factors. This study tested two non-mutually exclusive hypotheses for how leaf measurements and environmental factors correlate with greendown and decreasing NIR reflectance across sites. At the landscape scale, we used linear regression to test the effects of maximum greenness, elevation, slope, aspect, solar irradiance and canopy rugosity on greendown. Secondly, we used leaf chemical traits and reflectance observations to test the effect of nitrogen availability and intrinsic water use efficiency on leaf-level greendown and on landscape-level greendown measured from Landsat. The study was conducted using Quercus alba canopies across 21 sites of an eastern deciduous forest in North America between June and August 2014. Our linear model explained greendown variance with R² = 0.47, with maximum greenness as the greatest model effect. Subsequent models excluding one model effect at a time revealed that elevation and aspect were the two topographic factors explaining the greatest amount of greendown variance. Regression results also demonstrated important interactions between all three variables, the greatest being that aspect had a stronger influence on greendown at sites with steeper slopes. Leaf-level reflectance was correlated with foliar δ13C (a proxy for intrinsic water use efficiency), but foliar δ13C did not translate into correlations with landscape-level variation in greendown from Landsat. Therefore, we conclude that Landsat greendown is primarily indicative of landscape position, with a small effect of canopy structure and no measurable effect of leaf reflectance. With this understanding of Landsat greendown we can better explain the effects of landscape factors on vegetation reflectance and perhaps on phenology, which would be very useful for studying phenology in the context of global climate change.
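A minimal sketch of the landscape-scale analysis described above: an ordinary least squares fit of greendown on maximum greenness, elevation, slope and aspect, with a slope-by-aspect interaction. The synthetic values below stand in for the 21-site measurements and are not the study's data; the cosine "northness" transform of aspect is also an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 21                                               # sites in the study
max_green = rng.uniform(0.6, 0.9, n)                 # peak greenness (NDVI-like)
elevation = rng.uniform(200, 900, n)                 # m
slope = rng.uniform(0, 30, n)                        # degrees
aspect = np.cos(np.radians(rng.uniform(0, 360, n)))  # assumed "northness" transform
greendown = rng.uniform(0.05, 0.25, n)               # placeholder response

# Design matrix: intercept, main effects, and a slope x aspect interaction term.
X = np.column_stack([np.ones(n), max_green, elevation, slope, aspect, slope * aspect])
beta, _, _, _ = np.linalg.lstsq(X, greendown, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((greendown - pred) ** 2) / np.sum((greendown - greendown.mean()) ** 2)
print("coefficients:", beta.round(4), "R^2:", round(float(r2), 3))
```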
Abstract:
BACKGROUND: Risk assessment is fundamental in the management of acute coronary syndromes (ACS), enabling estimation of prognosis. AIMS: To evaluate whether the combined use of the GRACE and CRUSADE risk stratification schemes in patients with myocardial infarction outperforms each of the scores individually in terms of mortality and haemorrhagic risk prediction. METHODS: Observational retrospective single-centre cohort study including 566 consecutive patients admitted for non-ST-segment elevation myocardial infarction. The CRUSADE model increased the discriminatory performance of GRACE in predicting all-cause mortality, ascertained by Cox regression, demonstrating the independent and additive predictive value of CRUSADE, which was sustained throughout follow-up. The cohort was divided into four subgroups: G1 (GRACE<141; CRUSADE<41); G2 (GRACE<141; CRUSADE≥41); G3 (GRACE≥141; CRUSADE<41); G4 (GRACE≥141; CRUSADE≥41). RESULTS: Outcomes and variables estimating clinical severity, such as admission Killip-Kimball class and left ventricular systolic dysfunction, deteriorated progressively across the subgroups (G1 to G4). Survival analysis differentiated three risk strata (G1, lowest risk; G2 and G3, intermediate risk; G4, highest risk). The GRACE+CRUSADE model showed higher prognostic performance (area under the curve [AUC] 0.76) than GRACE alone (AUC 0.70) for mortality prediction, further confirmed by the integrated discrimination improvement index. Moreover, GRACE+CRUSADE combined risk assessment seemed to be valuable in delineating bleeding risk in this setting, identifying G4 as a very high-risk subgroup (hazard ratio 3.5; P<0.001). CONCLUSIONS: Combined risk stratification with GRACE and CRUSADE scores can improve the individual discriminatory power of the GRACE and CRUSADE models in the prediction of all-cause mortality and bleeding. This combined assessment is a practical approach that is potentially advantageous in treatment decision-making.
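For illustration, the snippet below applies the combined stratification described in the abstract, assigning a patient to subgroups G1 through G4 from the GRACE and CRUSADE cut-offs of 141 and 41; the example scores are invented.

```python
def grace_crusade_group(grace: float, crusade: float) -> str:
    """Return the combined GRACE+CRUSADE subgroup (G1 lowest risk, G4 highest)."""
    high_grace = grace >= 141
    high_crusade = crusade >= 41
    if not high_grace and not high_crusade:
        return "G1"   # low ischaemic and low bleeding risk
    if not high_grace and high_crusade:
        return "G2"   # low ischaemic, high bleeding risk
    if high_grace and not high_crusade:
        return "G3"   # high ischaemic, low bleeding risk
    return "G4"       # high ischaemic and high bleeding risk

# Hypothetical example patients
for g, c in [(120, 30), (120, 55), (160, 30), (170, 60)]:
    print(f"GRACE {g}, CRUSADE {c} -> {grace_crusade_group(g, c)}")
```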
Abstract:
In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
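As a toy illustration of the hinge-loss relaxation underlying knowledge graph identification, the sketch below encodes two extractor rules and one co-reference rule as convex hinge penalties over soft truth values in [0, 1] and minimizes them jointly. The facts, weights and use of a generic optimizer are illustrative stand-ins for the PSL-style machinery the dissertation applies at scale.

```python
import numpy as np
from scipy.optimize import minimize

# Soft truth values: x[0] = Rel(a, b), x[1] = Rel(a, c); noisy extractor confidences:
conf = np.array([0.9, 0.2])
w_extract, w_coref = 1.0, 2.0
coref_bc = 1.0                      # SameEntity(b, c) treated as observed and true

def objective(x):
    # Extractor rules:  Confidence(r) -> r,  hinge penalty max(0, conf - x)
    extract = w_extract * np.maximum(0.0, conf - x).sum()
    # Co-reference rule: Rel(a,b) & SameEntity(b,c) -> Rel(a,c), using the
    # Lukasiewicz conjunction max(0, x0 + coref_bc - 1) as the rule body
    body = max(0.0, x[0] + coref_bc - 1.0)
    coref = w_coref * max(0.0, body - x[1])
    return extract + coref

# Jointly minimize the weighted hinge losses over [0, 1]^2 (convex; solved here
# approximately with a generic bounded optimizer for illustration).
res = minimize(objective, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print("inferred truth values:", np.round(res.x, 3))
```

The co-reference rule pulls the weakly extracted fact Rel(a, c) up toward the strongly extracted Rel(a, b), which is the kind of dependency-aware cleanup KGI performs over millions of groundings.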
Abstract:
Leafy greens are an essential part of a healthy diet. Because of their health benefits, production and consumption of leafy greens have increased considerably in the U.S. in the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in the last few years. The overall goal of this dissertation was to use the current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens for the varying temperature conditions typically encountered in the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes, and a similar model for E. coli O157:H7, were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigeration of leafy greens increases their shelf-life and mitigates bacterial growth, but storing foods at lower temperatures increases the storage cost. Nonlinear programming was used to optimize the storage temperature of leafy greens during the supply chain while minimizing the storage cost and maintaining the desired levels of sensory quality and microbial safety. Most of the outbreaks associated with consumption of leafy greens contaminated with E. coli O157:H7 have occurred during July-November in the U.S. A dynamic system model consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall) simulating a farm in a major leafy-greens-producing area in California was developed. The model was simulated incorporating the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model are in agreement with the seasonality of outbreaks. Overall, this dissertation applied growth, survival, and death models of enteric pathogens in leafy greens across production and the supply chain.
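A minimal sketch of the growth side of such a growth-death model: a square-root (Ratkowsky-type) growth rate driven by a dynamic time-temperature profile during transport. The parameter values, initial contamination level and temperature trace are assumed for illustration, not the dissertation's fitted values.

```python
import numpy as np

b, T_min = 0.023, 1.5            # assumed square-root model parameters (1/sqrt(h)/degC, degC)

def mu_max(T):
    """Approximate maximum specific growth rate (log10 CFU/h) at temperature T (degC)."""
    return np.where(T > T_min, (b * (T - T_min)) ** 2, 0.0)

hours = np.arange(0, 24, 0.5)
temps = 4 + 10 * (hours > 8)     # cold chain lost after 8 h: 4 degC -> 14 degC

# Simple forward integration of log10 counts, ignoring lag and stationary phases.
log_n = 2.0                      # assumed initial contamination, log10 CFU/g
for T in temps:
    log_n += mu_max(T) * 0.5     # 0.5 h time step

print(f"predicted level after 24 h: {log_n:.2f} log10 CFU/g")
```

Running the same integration under alternative temperature traces is the basic operation behind both the transport predictions and the storage-temperature optimization mentioned above.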
Abstract:
The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests generated from the resultant models using the ThermaKin modeling environment were compared to experimental data to independently validate the models.
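To make one ingredient of a pyrolysis sub-model concrete, the sketch below integrates a single first-order Arrhenius reaction over a constant heating rate to produce a thermogravimetric-style conversion curve; the kinetic parameters and heating rate are illustrative values, not quantities measured in this work.

```python
import numpy as np

A, E = 1.0e12, 1.6e5             # assumed pre-exponential (1/s) and activation energy (J/mol)
R = 8.314                        # gas constant, J/(mol K)
beta = 10.0 / 60.0               # heating rate: 10 K/min expressed in K/s
T0, T1, dt = 300.0, 900.0, 0.1   # start/end temperatures (K) and time step (s)

alpha = 0.0                      # extent of conversion (0 = virgin, 1 = fully decomposed)
T = T0
while T < T1 and alpha < 0.999:
    k = A * np.exp(-E / (R * T))           # Arrhenius rate constant at current temperature
    alpha += k * (1.0 - alpha) * dt        # first-order conversion, explicit Euler step
    T += beta * dt

print(f"conversion {alpha:.3f} reached by {T:.0f} K")
```

In a full model each component of a layered composite contributes its own reaction scheme, thermodynamic properties and heats of combustion, which is what the multi-scale experiments described above are designed to isolate.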
Abstract:
Although tyrosine kinase inhibitors (TKIs) such as imatinib have transformed chronic myelogenous leukemia (CML) into a chronic condition, these therapies are not curative in the majority of cases. Most patients must continue TKI therapy indefinitely, a requirement that is both expensive and compromises a patient's quality of life. While TKIs are known to reduce leukemic cells' proliferative capacity and to induce apoptosis, their effects on leukemic stem cells, the immune system, and the microenvironment are not fully understood. A more complete understanding of their global therapeutic effects would help us to identify any limitations of TKI monotherapy and to address these issues through novel combination therapies. Mathematical models are a complementary tool to experimental and clinical data that can provide valuable insights into the underlying mechanisms of TKI therapy. Previous modeling efforts have focused on CML patients who show biphasic and triphasic exponential declines in BCR-ABL ratio during therapy. However, our patient data indicates that many patients treated with TKIs show fluctuations in BCR-ABL ratio yet are able to achieve durable remissions. To investigate these fluctuations, we construct a mathematical model that integrates CML with a patient's autologous immune response to the disease. In our model, we define an immune window, which is an intermediate range of leukemic concentrations that lead to an effective immune response against CML. While small leukemic concentrations provide insufficient stimulus, large leukemic concentrations actively suppress a patient's immune system, thus limiting its ability to respond. Our patient data and modeling results suggest that at diagnosis, a patient's high leukemic concentration is able to suppress their immune system. TKI therapy drives the leukemic population into the immune window, allowing the patient's immune cells to expand and eventually mount an efficient response against the residual CML. This response drives the leukemic population below the immune window, causing the immune population to contract and allowing the leukemia to partially recover. The leukemia eventually reenters the immune window, thus stimulating a sequence of weaker immune responses as the two populations approach equilibrium. We hypothesize that a patient's autologous immune response to CML may explain the fluctuations in BCR-ABL ratio that are regularly seen during TKI therapy. These fluctuations may serve as a signature of a patient's individual immune response to CML. By applying our modeling framework to patient data, we are able to construct an immune profile that can then be used to propose patient-specific combination therapies aimed at further reducing a patient's leukemic burden. Our characterization of a patient's anti-leukemia immune response may be especially valuable in the study of drug resistance, treatment cessation, and combination therapy.
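A hedged sketch of the qualitative structure described above: leukemic and immune populations coupled through a stimulation term that peaks at intermediate leukemic concentrations (the "immune window"), with TKI therapy as an additional kill term. All parameter values are invented for illustration and are not fitted patient values or the authors' actual equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.05, 1e6         # leukemia net growth rate (1/day) and carrying capacity
tki = 0.06               # assumed TKI-induced kill rate (1/day)
a, c, d = 0.4, 2e4, 0.1  # immune expansion, suppression scale, immune death
kill = 1e-3              # immune-mediated kill coefficient

def rhs(t, y):
    L, T = y
    # Stimulation peaks near L = c and is suppressed at high L: the immune window.
    stim = a * (L / c) / (1.0 + (L / c) ** 2)
    dL = r * L * (1 - L / K) - tki * L - kill * L * T
    dT = stim * T - d * T
    return [dL, dT]

sol = solve_ivp(rhs, (0, 1500), [5e5, 1.0], max_step=1.0)
print(f"final leukemic load: {sol.y[0, -1]:.3g}, immune cells: {sol.y[1, -1]:.3g}")
```

Simulated trajectories of this form decline under therapy, rebound partially as the immune response contracts, and settle through damped oscillations, mirroring the fluctuating BCR-ABL ratios discussed in the abstract.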
Abstract:
This thesis focuses on digital equalization of nonlinear fiber impairments for coherent optical transmission systems. Building from well-known physical models of signal propagation in single-mode optical fibers, novel nonlinear equalization techniques are proposed, numerically assessed and experimentally demonstrated. The structure of the proposed algorithms is strongly driven by the optimization of the performance versus complexity tradeoff, envisioning the near-future practical application in commercial real-time transceivers. The work is initially focused on the mitigation of intra-channel nonlinear impairments relying on the concept of digital backpropagation (DBP) associated with Volterra-based filtering. After a comprehensive analysis of the third-order Volterra kernel, a set of critical simplifications are identified, culminating in the development of reduced complexity nonlinear equalization algorithms formulated both in time and frequency domains. The implementation complexity of the proposed techniques is analytically described in terms of computational effort and processing latency, by determining the number of real multiplications per processed sample and the number of serial multiplications, respectively. The equalization performance is numerically and experimentally assessed through bit error rate (BER) measurements. Finally, the problem of inter-channel nonlinear compensation is addressed within the context of 400 Gb/s (400G) superchannels for long-haul and ultra-long-haul transmission. Different superchannel configurations and nonlinear equalization strategies are experimentally assessed, demonstrating that inter-subcarrier nonlinear equalization can provide an enhanced signal reach while requiring only marginal added complexity.
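For orientation, the sketch below shows the standard single-channel digital backpropagation idea the thesis builds from: split-step Fourier propagation through a virtual fiber with negated loss, dispersion and nonlinearity. Fiber parameters are typical textbook values, the power-profile correction within each step is neglected, the sign conventions follow one common form of the scalar nonlinear Schrödinger equation, and the test waveform is synthetic; this is not the reduced-complexity Volterra equalizer developed in the thesis.

```python
import numpy as np

def dbp_span(rx, sample_rate_hz, length_km=80.0, steps=20,
             alpha_db_per_km=0.2, beta2_ps2_per_km=-21.7, gamma_per_w_km=1.3):
    """Backpropagate one fiber span by split-step Fourier with negated parameters."""
    n = rx.size
    dz = length_km / steps
    alpha_np = alpha_db_per_km * np.log(10) / 10                   # dB/km -> Np/km (power)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / sample_rate_hz) * 1e-12   # rad/ps
    # Inverse linear operator for one step: undo loss and chromatic dispersion.
    lin_inv = np.exp((alpha_np / 2 - 1j * beta2_ps2_per_km / 2 * omega**2) * dz)
    field = rx.astype(complex)
    for _ in range(steps):
        field = np.fft.ifft(np.fft.fft(field) * lin_inv)
        field *= np.exp(-1j * gamma_per_w_km * np.abs(field)**2 * dz)   # undo SPM
    return field

# Toy usage with a random QPSK-like waveform at 2 samples/symbol, 32 GBd (assumed).
rng = np.random.default_rng(2)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 2048) / np.sqrt(2)
rx = np.repeat(symbols, 2) * np.sqrt(1e-3)   # roughly 0 dBm average launch power
out = dbp_span(rx, sample_rate_hz=64e9)
print(np.round(out[:4], 4))
```

The complexity-reduction work in the thesis targets exactly the per-step FFTs and nonlinear phase rotations shown here, replacing them with simplified Volterra-based filters in the time or frequency domain.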
Abstract:
Theories of sparse signal representation, wherein a signal is decomposed as the sum of a small number of constituent elements, play increasing roles in both mathematical signal processing and neuroscience. This happens despite the differences between signal models in the two domains. After reviewing preliminary material on sparse signal models, I use work on compressed sensing for the electron tomography of biological structures as a target for exploring the efficacy of sparse signal reconstruction in a challenging application domain. My research in this area addresses a topic of keen interest to the biological microscopy community, and has resulted in the development of tomographic reconstruction software which is competitive with the state of the art in its field. Moving from the linear signal domain into the nonlinear dynamics of neural encoding, I explain the sparse coding hypothesis in neuroscience and its relationship with olfaction in locusts. I implement a numerical ODE model of the activity of neural populations responsible for sparse odor coding in locusts as part of a project involving offset spiking in the Kenyon cells. I also explain the validation procedures we have devised to help assess the model's similarity to the biology. The thesis concludes with the development of a new, simplified model of locust olfactory network activity, which seeks with some success to explain statistical properties of the sparse coding processes carried out in the network.
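As a small, self-contained example of the sparse reconstruction machinery referenced above, the sketch below recovers a sparse vector from underdetermined random measurements with iterative soft-thresholding (ISTA); problem sizes and the penalty weight are illustrative, and the electron-tomography operators in the thesis are far larger and more structured.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 60, 200, 8                          # measurements, signal length, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # noiseless measurements

lam = 0.02                                    # l1 penalty weight (assumed)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                  # gradient of 0.5 * ||A x - y||^2
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold step

print("relative recovery error:", round(np.linalg.norm(x - x_true) / np.linalg.norm(x_true), 4))
```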