958 results for Implementation models


Relevance: 30.00%

Abstract:

Microelectronic systems are multi-material, multi-layer structures, fabricated and exposed to environmental stresses over a wide range of temperatures. Thermal and residual stresses created by thermal mismatches in films and interconnections are a major cause of failure in microelectronic devices. Owing to new device materials, increasing die sizes and the introduction of new materials for enhanced thermal management, differences in the thermal expansion of the various packaging materials have become exceedingly important and can no longer be neglected. X-ray diffraction is an analytical method that uses a monochromatic characteristic X-ray beam to characterize the crystal structure of a material by measuring the distances between planes in its atomic crystal lattice. As a material is strained, this interplanar spacing is correspondingly altered, and the microscopic strain is used to determine the macroscopic strain. This thesis investigates and describes the theory and implementation of X-ray diffraction for the measurement of residual thermal strains. The design of a computer-controlled stress attachment stage fully compatible with an Anton Paar heat stage will be detailed. The stress determined by the diffraction method will be compared with bimetallic strip theory and finite element models.
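As a concrete illustration of the diffraction relation described above, here is a minimal sketch (hypothetical peak positions and standard Bragg's-law algebra only, not the thesis's analysis code) of how a shift in a diffraction peak converts to lattice strain:

```python
import math

def bragg_spacing(wavelength_nm: float, two_theta_deg: float, order: int = 1) -> float:
    """Interplanar spacing d from Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_nm / (2.0 * math.sin(theta))

# Hypothetical Cu K-alpha measurement: a peak shifts from 2-theta = 44.50 deg
# (strain-free reference) to 44.38 deg after thermal loading.
wavelength = 0.15406                    # Cu K-alpha wavelength, nm
d0 = bragg_spacing(wavelength, 44.50)   # reference (strain-free) spacing
d = bragg_spacing(wavelength, 44.38)    # spacing under residual stress

strain = (d - d0) / d0                  # lattice strain from the spacing change
print(f"d0 = {d0:.5f} nm, d = {d:.5f} nm, strain = {strain:.2e}")
```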

Relevance: 30.00%

Abstract:

This thesis presents details of the design and development of novel tools and instruments for scanning tunneling microscopy (STM), and may be considered a repository for several years' worth of development work. The author presents design goals and implementations for two microscopes. First, a novel Pan-type STM was built that could be operated in an ambient environment as a liquid-phase STM. Unique features of this microscope include a unibody frame, for increased microscope rigidity, a novel slider component with large Z-range, a unique wiring scheme and damping mechanism, and a removable liquid cell. The microscope exhibits a high level of mechanical isolation at the tunnel junction, and operates excellently as an ambient tool. Experiments in liquid are ongoing. Simultaneously, the author worked on designs for a novel low temperature, ultra-high vacuum (LT-UHV) instrument, and these are presented as well. A novel stick-slip vertical coarse approach motor was designed and built. To gauge the performance of the motor, an in situ motion sensing apparatus was implemented, which could measure the step size of the motor to high precision. A new driving circuit for stick-slip inertial motors is also presented, that offers improved performance over our previous driving circuit, at a fraction of the cost. The circuit was shown to increase step size performance by 25%. Finally, a horizontal sample stage was implemented in this microscope. The build of this UHV instrument is currently being finalized. In conjunction with the above design projects, the author was involved in a collaborative project characterizing N-heterocyclic carbene (NHC) self-assembled monolayers (SAMs) on Au(111) films. STM was used to characterize Au substrate quality, for both commercial substrates and those manufactured via a unique atomic layer deposition (ALD) process by collaborators. Ambient and UHV STM was then also used to characterize the NHC/Au(111) films themselves, and several key properties of these films are discussed. During this study, the author discovered an unexpected surface contaminant, and details of this are also presented. Finally, two models are presented for the nature of the NHC-Au(111) surface interaction based on the observed film properties, and some preliminary theoretical work by collaborators is presented.

Relevance: 30.00%

Abstract:

This paper describes the implementation of a method for integrating parametric, feature-based CAD models built on commercial software (CATIA) with the SU2 software framework. To exploit the adjoint-based methods for aerodynamic optimisation within SU2, a formulation for obtaining geometric sensitivities directly from the commercial CAD parameterisation is introduced, enabling the calculation of gradients with respect to CAD-based design variables. To assess the accuracy and efficiency of the alternative approach, two aerodynamic optimisation problems are investigated: an inviscid 3D problem with multiple constraints, and a viscous 2D high-lift aerofoil problem without any constraints. Initial results show the new parameterisation obtaining reliable optima, with levels of performance similar to the software's native parameterisations. In the final paper, details of computing CAD sensitivities will be provided, including their accuracy as well as the linking of geometric sensitivities to aerodynamic objective functions and constraints; the impact on the robustness of the overall method will be assessed and alternative parameterisations will be included.
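The gradient assembly described above amounts to a chain rule between the flow adjoint and finite-difference CAD derivatives. The sketch below is an assumed minimal form, not the paper's implementation: `evaluate_surface` stands in for CAD model regeneration plus surface meshing, and `dJ_dX` for the surface sensitivity field an adjoint solver such as SU2's would produce.

```python
import numpy as np

def cad_gradient(evaluate_surface, params, dJ_dX, step=1e-6):
    """Chain adjoint surface sensitivities dJ/dX with finite-difference
    geometric sensitivities dX/dp of a parametric CAD model.

    evaluate_surface(p) -> (n_points, 3) surface mesh coordinates
    dJ_dX              -> (n_points, 3) adjoint sensitivity of objective J
    Returns dJ/dp for each CAD design parameter.
    """
    X0 = evaluate_surface(params)
    grad = np.zeros(len(params))
    for i in range(len(params)):
        p = params.copy()
        p[i] += step
        dX_dp = (evaluate_surface(p) - X0) / step  # geometric sensitivity
        grad[i] = np.sum(dJ_dX * dX_dp)            # chain rule: dJ/dp = dJ/dX : dX/dp
    return grad

# Hypothetical usage with a toy "CAD" surface controlled by two parameters:
surf = lambda p: np.array([[p[0], 0.0, 0.0], [0.0, p[1], 0.0]])
print(cad_gradient(surf, np.array([1.0, 2.0]), dJ_dX=np.ones((2, 3))))
# -> [1. 1.]: each parameter moves one surface point by one unit
```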

Relevance: 30.00%

Abstract:

Background: Implementing effective antenatal care models is a key global policy goal. However, the mechanisms of action of these multi-faceted models that would allow widespread implementation are seldom examined and poorly understood. In existing care model analyses there is little distinction between what is done, how it is done, and who does it. A new evidence-informed quality maternal and newborn care (QMNC) framework identifies key characteristics of quality care. This offers the opportunity to identify systematically the characteristics of care delivery that may be generalizable across contexts, thereby enhancing implementation. Our objective was to map the characteristics of antenatal care models tested in Randomised Controlled Trials (RCTs) to a new evidence-based framework for quality maternal and newborn care; thus facilitating the identification of characteristics of effective care.

Methods: A systematic review of RCTs of midwifery-led antenatal care models. Mapping and evaluation of these models’ characteristics to the QMNC framework using data extraction and scoring forms derived from the five framework components. Paired team members independently extracted data and conducted quality assessment using the QMNC framework and standard RCT criteria.

Results: From 13,050 citations initially retrieved we identified 17 RCTs of midwifery-led antenatal care models from Australia (7), the UK (4), China (2), and Sweden, Ireland, Mexico and Canada (1 each). QMNC framework scores ranged from 9 to 25 (possible range 0–32), with most models reporting fewer than half the characteristics associated with quality maternity care. Description of care model characteristics was lacking in many studies, but was better reported for the intervention arms. Organisation of care was the best-described component. Underlying values and philosophy of care were poorly reported.

Conclusions: The QMNC framework facilitates assessment of the characteristics of antenatal care models. It is vital to understand all the characteristics of multi-faceted interventions such as care models: not only what is done, but also why it is done, by whom, and how it differs from the standard care package. By applying the QMNC framework we have established a foundation for future reports of intervention studies, so that the characteristics of individual models can be evaluated and the impact of any differences appraised.

Relevance: 30.00%

Abstract:

In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, the methods are tolerant to inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP that efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM.
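The paper's GPGPU implementation aligns full depth frames; the snippet below is only a minimal CPU sketch of the core ICP update it relies on, using brute-force nearest-neighbour matching (a k-d tree or projective data association would be used at scale) and the standard SVD solution for the rigid transform.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest destination point, then solve for the rigid transform (R, t)
    that best aligns the pairs (Kabsch / SVD solution)."""
    # Brute-force nearest neighbours; fine for toy clouds.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]

    # Centre both clouds and solve for the rotation via SVD.
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: a small pure translation is recovered in one step.
rng = np.random.default_rng(0)
P = rng.random((50, 3))
R, t = icp_step(P, P + np.array([0.01, 0.0, 0.0]))
print(np.round(t, 4))  # -> [0.01 0. 0.]
```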

Relevance: 30.00%

Abstract:

This case study research reports on a small and medium-sized (SME) business-to-business (B2B) services firm implementing a novel new service development (NSD) process. It provides accounts of what occurred in practice in terms of the challenges to NSD process implementation and how the firm overcame these challenges. It also considers the implications for NSD in this and other firms’ innovation practices. This longitudinal case study (18 months) was conducted “inside” the case organization. It covered the entire innovation process from the initiation to the launch of a new service. The primary method may be viewed as participant observation. The research involved all those participating in the innovation system in the firm, including decision-makers, middle managers and employees at lower hierarchical levels and the firm’s external networks. Implications for researchers and managers focusing on structured innovation models for the services sector are also presented.

Relevance: 30.00%

Abstract:

Sound localisation is defined as the ability to identify the position of a sound source. The brain employs two cues to achieve this in the horizontal plane: interaural time difference (ITD), processed by neurons in the medial superior olive (MSO), and interaural intensity difference (IID), processed by neurons of the lateral superior olive (LSO), both located in the superior olivary complex of the auditory pathway. This paper presents spiking neuron architectures of the MSO and LSO. An implementation of the Jeffress model using spiking neurons is presented as a representation of the MSO, and a spiking neuron architecture is discussed showing how neurons of the medial nucleus of the trapezoid body interact with LSO neurons to determine the azimuthal angle. Experimental results supporting this work are presented.
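The Jeffress model mentioned above pairs axonal delay lines with coincidence detectors: the cell whose internal delay cancels the ITD fires most. Below is a toy sketch of that idea with hypothetical spike times; it illustrates the principle only and is not the paper's spiking architecture.

```python
import numpy as np

def jeffress_itd(left_spikes, right_spikes, max_itd=660e-6, n_cells=33,
                 window=20e-6):
    """Toy Jeffress model: coincidence cell i delays the left spike train
    by d_i; the cell whose delay best cancels the interaural time
    difference collects the most coincidences."""
    delays = np.linspace(-max_itd, max_itd, n_cells)
    counts = [sum(1 for tl in left_spikes for tr in right_spikes
                  if abs((tl + d) - tr) < window)
              for d in delays]
    return -delays[int(np.argmax(counts))]  # delay that cancels the ITD

# 500 Hz tone from the right: the left ear's spikes lag by ~300 microseconds.
t = np.arange(0, 0.02, 1 / 500.0)
print(jeffress_itd(left_spikes=t + 300e-6, right_spikes=t))
# ~2.89e-04 s: the delay cell nearest the true +300 us ITD wins
```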

Relevance: 30.00%

Abstract:

In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data or to errors made by the extraction system, together with the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows inference over large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
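Hinge-loss Markov random fields score each ground rule by its distance to satisfaction under a Lukasiewicz relaxation, which is what keeps the MAP objective convex. A minimal sketch of that potential follows; the rule, weight and truth values are hypothetical, in the spirit of PSL-style KGI rules, not taken from the dissertation.

```python
def hinge_loss(body, head):
    """Distance to satisfaction of a soft rule body -> head under the
    Lukasiewicz relaxation used by hinge-loss MRFs:
    max(0, AND(body) - head), with AND(x, y) = max(0, x + y - 1)."""
    conj = max(0.0, sum(body) - (len(body) - 1))  # Lukasiewicz conjunction
    return max(0.0, conj - head)

# Hypothetical soft truth values from noisy extractors:
same_entity = 0.9   # co-reference: "J. Smith" vs "John Smith"
label_a     = 0.8   # extractor says J. Smith is a Professor
label_b     = 0.3   # weak evidence that John Smith is a Professor

# Rule: SameEntity(a,b) & Label(a,P) -> Label(b,P); the weight scales the penalty.
weight = 5.0
penalty = weight * hinge_loss([same_entity, label_a], label_b)
print(penalty)  # 5 * max(0, 0.7 - 0.3) = 2.0
```

Because each such potential is convex in the soft truth values, minimizing their weighted sum (MAP inference) stays tractable even at the 4M-fact scale described above.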

Relevance: 30.00%

Abstract:

This thesis analyses the performance and efficiency of companies and identifies the key factors that may explain them. A comprehensive analysis based on a set of economic and financial ratios was used as an instrument providing information on enterprise performance and efficiency. A sample of 15 enterprises was selected: 7 Portuguese and 8 Ukrainian, belonging to several industries. Financial and non-financial data were collected for 6 years, covering the period 2009 to 2014. The research questions that guided this work were: Are the enterprises efficient/profitable? What factors influence the enterprises’ efficiency/performance? Is there any difference between Ukrainian and Portuguese enterprises’ efficiency/performance, and which factors have more influence? Which industrial sector is represented by the most efficient/profitable enterprises? The main results showed that, on average, the enterprises were efficient; comparing by country, Ukrainian enterprises are more efficient, and the industries have similar levels of efficiency. Factors that influence ATR positively are the fixed and current assets turnover ratios and ROA; the EBITDA margin and the liquidity ratio have a negative influence. There is no significant difference between the models by country. Concerning profitability, the enterprises have a low performance level, but in a comparison of countries Ukrainian enterprises have better profitability on average. Regarding industry sector, the paper industry is the most profitable. Factors influencing ROA are the profit margin, fixed assets turnover ratio, EBITDA margin, debt-to-equity ratio and the country. For profitability, the two countries have different models. For Ukrainian enterprises, it is suggested to pay attention to short-term debt to total debt, ROA and the interest coverage ratio in order to become more efficient, and to the profit margin and EBITDA margin to improve performance. For Portuguese enterprises, monitoring and improving the fixed assets turnover ratio, current assets turnover ratio, short-term financial debt to total debt, leverage ratio and EBITDA margin is suggested for improving efficiency; for higher profitability, tracking the fixed assets turnover ratio, current assets turnover ratio, debt-to-equity ratio, profit margin and interest coverage ratio is suggested.
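For readers unfamiliar with the ratios referenced above, here is a small sketch showing their standard definitions; the figures are invented for illustration and are not from the sample studied.

```python
# Standard definitions of the ratios discussed above, with hypothetical figures.
revenue, ebitda, net_income = 2_000_000, 300_000, 120_000
total_assets, total_debt, equity = 1_500_000, 600_000, 900_000

asset_turnover = revenue / total_assets     # efficiency: sales per unit of assets
roa            = net_income / total_assets  # profitability of the asset base
ebitda_margin  = ebitda / revenue           # operating profitability of sales
debt_to_equity = total_debt / equity        # leverage

print(f"Asset turnover: {asset_turnover:.2f}  ROA: {roa:.1%}  "
      f"EBITDA margin: {ebitda_margin:.1%}  D/E: {debt_to_equity:.2f}")
```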

Relevance: 30.00%

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at the lowest cost. The most common methods of implementation planning are suboptimal relative to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than traditional methods allow, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Any public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; for example, Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection while leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). A further model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage, a comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
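For the binary products mentioned in the last paragraph, the McCormick inequalities give an exact linearization. The check below verifies this on all binary inputs; it is a generic illustration of the 1976 inequalities, not the dissertation's model.

```python
from itertools import product

def mccormick_ok(x, y, z):
    """Check the McCormick constraints that exactly linearize z = x * y
    for binary x, y (McCormick, 1976):
        z <= x,   z <= y,   z >= x + y - 1,   z >= 0
    """
    return z <= x and z <= y and z >= x + y - 1 and z >= 0

# For binary x, y the only feasible z is exactly x * y:
for x, y in product([0, 1], repeat=2):
    feasible = [z for z in (0, 1) if mccormick_ok(x, y, z)]
    assert feasible == [x * y]
    print(f"x={x}, y={y} -> z must be {feasible[0]}")
```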

Relevance: 30.00%

Abstract:

Biogeochemical-Argo is the extension of the Argo array of profiling floats to include floats that are equipped with biogeochemical sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance. Argo is a highly regarded international program that measures the changing ocean temperature (heat content) and salinity with profiling floats distributed throughout the ocean. Newly developed sensors now allow profiling floats to also observe biogeochemical properties with sufficient accuracy for climate studies. This extension of Argo will enable an observing system that can determine the seasonal to decadal-scale variability in biological productivity, the supply of essential plant nutrients from deep waters to the sunlit surface layer, ocean acidification, hypoxia, and ocean uptake of CO2. Biogeochemical-Argo will drive a transformative shift in our ability to observe and predict the effects of climate change on ocean metabolism, carbon uptake, and living marine resource management. Presently, vast areas of the open ocean are sampled only once per decade or less, with sampling occurring mainly in summer. Our ability to detect changes in biogeochemical processes that may occur due to the warming and acidification driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by this undersampling. In close synergy with satellite systems (which are effective at detecting global patterns for a few biogeochemical parameters, but only very close to the sea surface and in the absence of clouds), a global array of biogeochemical sensors would revolutionize our understanding of ocean carbon uptake, productivity, and deoxygenation. The array would reveal the biological, chemical, and physical events that control these processes. Such a system would enable a new generation of global ocean prediction systems in support of carbon cycling, acidification, hypoxia and harmful algal bloom studies, as well as the management of living marine resources. In order to prepare for a global Biogeochemical-Argo array, several prototype profiling float arrays have been developed at the regional scale by various countries and are now operating. Examples include regional arrays in the Southern Ocean (SOCCOM), the North Atlantic Sub-polar Gyre (remOcean), the Mediterranean Sea (NAOS), the Kuroshio region of the North Pacific (INBOX), and the Indian Ocean (IOBioArgo). For example, the SOCCOM program is deploying 200 profiling floats with biogeochemical sensors throughout the Southern Ocean, including areas covered seasonally with ice. The resulting data, which are publicly available in real time, are being linked with computer models to better understand the role of the Southern Ocean in influencing CO2 uptake, biological productivity, and nutrient supply to distant regions of the world ocean. The success of these regional projects motivated a planning meeting to discuss the requirements for and applications of a global-scale Biogeochemical-Argo program. The meeting was held 11-13 January 2016 in Villefranche-sur-Mer, France, with attendees from the eight nations now deploying Argo floats with biogeochemical sensors. In preparation, computer simulations and a variety of analyses were conducted to assess the resources required for the transition to a global-scale array.

Based on these analyses and simulations, it was concluded that an array of about 1000 biogeochemical profiling floats would provide the resolution needed to greatly improve our understanding of biogeochemical processes and to enable significant improvement in ecosystem models. With an endurance of four years per float, this system would require the procurement and deployment of 250 new floats per year to maintain a 1000-float array. The lifetime cost of a Biogeochemical-Argo float, including capital expense, calibration, data management, and data transmission, is about $100,000. A global Biogeochemical-Argo system would thus cost about $25,000,000 annually. In the present Argo paradigm, the US provides half of the profiling floats in the array, while the EU, Austral/Asia, and Canada share most of the remaining half. If this approach is adopted, the US cost for the Biogeochemical-Argo system would be ~$12,500,000 annually, with ~$6,250,000 each for the EU and for Austral/Asia and Canada. This includes no direct costs for ship time and presumes that float deployments can be carried out from future research cruises of opportunity, including, for example, the international GO-SHIP program (http://www.go-ship.org). The full-scale implementation of a global Biogeochemical-Argo system with 1000 floats is feasible within a decade. The successful, ongoing pilot projects have provided the foundation and start for such a system.
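The budget figures above follow from simple replacement-rate arithmetic; a quick sketch reproducing them:

```python
# Replacement-rate arithmetic behind the cost figures quoted above.
array_size        = 1000       # target number of floats in the global array
endurance_years   = 4          # expected lifetime of one float
lifetime_cost_usd = 100_000    # per float: capital, calibration, data handling

floats_per_year = array_size / endurance_years          # 250 floats/year
annual_cost     = floats_per_year * lifetime_cost_usd   # $25,000,000/year

us_share = annual_cost / 2                       # US provides half: ~$12.5M
eu_share = australasia_canada = annual_cost / 4  # ~$6.25M each for the rest
print(f"{floats_per_year:.0f} floats/yr, ${annual_cost:,.0f}/yr "
      f"(US ${us_share:,.0f}; others ${eu_share:,.0f} each)")
```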

Relevance: 30.00%

Abstract:

Despite the extensive implementation of Superstreets on congested arterials, reliable methodologies for such designs remain unavailable. The purpose of this research is to fill the information gap by offering reliable tools to assist traffic professionals in the design of Superstreets with and without signal control. The toolset developed in this thesis consists of three models. The first model determines the minimum U-turn offset length for an unsignalized Superstreet, given the headway distribution of the arterial traffic flows and the distribution of critical gaps among drivers. The second model estimates the queue size and its variation on each critical link in a signalized Superstreet, based on the given signal plan and the range of observed volumes. Recognizing that the operational performance of a Superstreet cannot be achieved without an effective signal plan, the third model provides a signal optimization method that can generate progression offsets for heavy arterial flows moving into and out of such an intersection design.
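A basic building block for the first model's gap-acceptance reasoning can be sketched under simplifying assumptions the thesis generalizes: exponential headways (Poisson arrivals) and a single fixed critical gap rather than a distribution across drivers.

```python
import math

def p_acceptable_gap(flow_vph, critical_gap_s):
    """Probability that a randomly encountered headway exceeds the
    critical gap, assuming Poisson arrivals (exponential headways)."""
    q = flow_vph / 3600.0                 # arrival rate, veh/s
    return math.exp(-q * critical_gap_s)

# Hypothetical arterial: 900 veh/h opposing flow, 6 s critical gap.
p = p_acceptable_gap(900, 6.0)
print(f"P(gap >= 6 s) = {p:.3f}")  # ~0.223: roughly one usable gap per 4.5 headways
```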

Relevance: 30.00%

Abstract:

Over the past decade, several Australian universities have offered a double degree in nursing and paramedicine. Mainstream employment models that facilitate integrated graduate practice in both nursing and paramedicine are currently lacking. The aim of the present study was to detail the development of the Interprofessional Graduate Program (IPG), the industrial and professional issues that required solutions, the outcomes from the first pilot IPG group and future directions. The IPG was an 18-month program during which participants rotated between graduate nursing experience in emergency nursing at Northern Health, Melbourne, Australia and graduate paramedic experience with Ambulance Victoria. The first IPG, with 10 participants, ran from January 2011 to August 2012. A survey completed by nine of the 10 participants in March 2014 showed that all nine nominated Ambulance Victoria as their main employer and five were working casual shifts in nursing. Alternative graduate programs that span two health disciplines are feasible but hampered by rigid industrial relations structures and professional ideologies. Despite a 'purpose-built' graduate program spanning two disciplines, traditional organisational structures still hamper double-degree graduates from using all of their skills to full capacity, and force the selection of one dominant profession.

Relevance: 30.00%

Abstract:

This review discusses palliative care and end-of-life models of care for Aboriginal people in the Australian state of New South Wales, and considers Aboriginal palliative care needs by reflecting on recent literature and lessons derived from Aboriginal consultation. Aboriginal people in Australia account for a very small proportion of the population, have poorer health outcomes, and their culture demonstrates a clear resistance to accessing mainstream health services, which are viewed as powerful, isolating and not relevant to their culture, way of life, family and belief systems. Aboriginal people regard their land as spiritual, and their culture dictates that an Aboriginal person needs to know their origins, emphasising the value placed on kin and also demonstrating a strong desire to remain within their own country. Currently, Aboriginal people tend not to access palliative care services in mainstream facilities, and there is very little data on Aboriginal admissions to palliative care centres. Over the last two decades only two models of palliative care focusing on, and developed in, Aboriginal communities have been implemented. The seminal contribution to Aboriginal palliative care was a resource kit developed to support palliative care providers in examining their practice for cultural appropriateness for Aboriginal and Torres Strait Islanders. The "living model" coming from this project is adaptive and flexible, enabling implementation in different Aboriginal country as a participative process with community input. The Australian government's National Indigenous Palliative Care Needs Study similarly indicated that Australian empirical research on Aboriginal palliative care service provision is in its infancy, and that comprehensive data on the rates of Aboriginal access to palliative care services did not exist. What literature does exist is drawn together here in an argument for the development of, and need for, culturally specific Aboriginal palliative care models that are culturally appropriate, locally accessible and delivered in collaboration and partnership with Aboriginal controlled health services. This is essential because Aboriginal people are a minority cultural group who are disconnected from mainstream health service delivery, and have a sense of cultural isolation when accessing mainstream services. It is preferable that palliative care is delivered in a collaboration between Aboriginal Controlled Health Services and mainstream palliative care services to ensure a dignified end of life for the Aboriginal person. These collaborations and partnerships are fundamental to ensuring that a critical mass of Aboriginal clinicians are trained and experienced in end-of-life care and palliation. Developing palliative care programs within Aboriginal communities and training Aboriginal Health Workers, promoted and developed in partnership with the Aboriginal community, are important strategies to enhance palliative care service provision. Further partnerships should be championed in this collaborative process, acknowledging a need for palliative care models that fit with Aboriginal peoples' community values, beliefs, cultural/spiritual rituals, heritage and place.

Relevance: 30.00%

Abstract:

Retaining walls are important assets in the transportation infrastructure, and assessing their condition is important to prolong their performance and ultimately their design life. Retaining walls are often overlooked, and only a few transportation asset management programs consider them in their inventory. Because these programs are few, the techniques used to assess wall condition rely on qualitative assessment rather than a quantitative approach. The work presented in this thesis focuses on using photogrammetry to quantitatively assess the condition of retaining walls. Multitemporal photogrammetry is used to develop 3D models of the retaining walls, from which offset displacements are measured to assess their condition. This study presents a case study from a site along the M-10 highway in Detroit, MI, where several sections of retaining wall have experienced horizontal displacement towards the highway. The results are validated by comparison with field observations and measurements. The limitations of photogrammetry were also studied using a small-scale model in the laboratory. The analysis found that the accuracy of the offset displacement measurements depends on the distance between the retaining wall and the sensor, the location of the reference points in 3D space, and the focal length of the camera lens. These parameters were not ideal for the case study at the M-10 highway site, but the results provided consistent trends in the movement of the retaining wall that could not, however, be validated against direct offset measurements. The findings of this study confirm that photogrammetry shows promise in generating 3D models that provide a quantitative condition assessment for retaining walls, within its limitations.
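The distance and focal-length dependence noted above is captured by the standard ground-sample-distance relation for a pinhole camera; here is a small sketch with hypothetical survey numbers (not the study's actual camera setup):

```python
def ground_sample_distance(distance_m, focal_length_mm, pixel_pitch_um):
    """Object-space size of one pixel (GSD) for a pinhole camera:
    GSD = Z * p / f. A smaller GSD means finer displacement resolution."""
    return distance_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Hypothetical survey: wall 15 m from the camera, 50 mm lens, 4.5 um pixels.
gsd = ground_sample_distance(15.0, 50.0, 4.5)
print(f"GSD = {gsd * 1000:.2f} mm/pixel")
# ~1.35 mm: wall displacements much smaller than this are unlikely to be
# resolved reliably, which is why sensor distance and focal length matter.
```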