900 results for "Algorithm design and analysis"


Relevance:

100.00%

Publisher:

Abstract:

Combining the advantages of parallel mechanisms and compliant mechanisms, a compliant parallel mechanism with two rotational DOFs (degrees of freedom) is designed to meet the requirements of a lightweight and compact pan-tilt platform. Firstly, two commonly used design methods, direct substitution and FACT (Freedom and Constraint Topology), are applied to design the configuration of the pan-tilt system, and the similarities and differences of the two design alternatives are compared. Then inverse kinematic analysis of the candidate mechanism is carried out using the pseudo-rigid-body model (PRBM), and the Jacobian of its differential kinematics is derived to support dynamic analysis of the 8R compliant mechanism. In addition, the maximum stress the mechanism experiences within its workspace is evaluated by finite element analysis. Finally, a method to determine the joint damping of the flexure hinges is presented, aimed at exploring the effect of joint damping on actuator selection and real-time control. To the authors' knowledge, almost no existing literature addresses this issue.
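
As a loose illustration of the kind of differential-kinematics computation described above, the sketch below builds a Jacobian by central differences for a hypothetical two-DOF pan-tilt forward map. The forward_kinematics function is a stand-in invented here for illustration; it is not the paper's 8R PRBM model.

```python
import numpy as np

def forward_kinematics(theta):
    """Hypothetical PRBM forward map: joint angles -> (pan, tilt) of the platform.
    A toy stand-in for the 8R mechanism's actual kinematics, which the paper derives."""
    t1, t2 = theta
    # Illustrative weak coupling between the two rotational DOFs.
    return np.array([t1 + 0.1 * np.sin(t2), t2 + 0.1 * np.sin(t1)])

def numerical_jacobian(f, theta, h=1e-6):
    """Central-difference Jacobian J[i, j] = d f_i / d theta_j."""
    theta = np.asarray(theta, dtype=float)
    n = theta.size
    m = f(theta).size
    J = np.zeros((m, n))
    for j in range(n):
        dt = np.zeros(n); dt[j] = h
        J[:, j] = (f(theta + dt) - f(theta - dt)) / (2 * h)
    return J

J = numerical_jacobian(forward_kinematics, [0.2, 0.3])
print(J)  # maps joint rates to pan-tilt rates, the quantity used in dynamic analysis
```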

Relevance:

100.00%

Publisher:

Abstract:

Petri nets are a formal, graphical, and executable modeling technique for the specification and analysis of concurrent and distributed systems, and have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows but are not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand, and are therefore more useful for modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework supported by a tool. For modeling, the framework integrates two formal languages: a type of HLPN called a Predicate Transition Net (PrT Net) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is a software tool that supports the formal modeling capabilities of this framework. For analysis, the framework combines three complementary techniques: simulation, explicit-state model checking, and bounded model checking (BMC). Simulation is straightforward and fast but covers only some execution paths in an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state-explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit-state model checking. The main contribution with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques into a software tool that supports the formal analysis capabilities of this framework. The SAMTools suite developed for this framework integrates three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
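
To illustrate the simulation side of the analysis described above, here is a minimal sketch of a predicate/transition-style net with structured tokens, guards, and transition expressions. The representation and the example net are assumptions made for illustration; they do not reflect the internals of PIPE+.

```python
import random

# Minimal predicate/transition-style net: places hold structured tokens,
# transitions carry a guard (a predicate over bound tokens) and an expression
# producing output tokens. A random simulation run covers one execution path.

class Transition:
    def __init__(self, name, inputs, outputs, guard, expr):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.guard, self.expr = guard, expr

def enabled_bindings(marking, t):
    """Enumerate token choices (one per input place) satisfying the guard."""
    def rec(i, binding):
        if i == len(t.inputs):
            if t.guard(*binding):
                yield tuple(binding)
            return
        for tok in marking[t.inputs[i]]:
            yield from rec(i + 1, binding + [tok])
    yield from rec(0, [])

def fire(marking, t, binding):
    for p, tok in zip(t.inputs, binding):
        marking[p].remove(tok)
    for p, tok in zip(t.outputs, t.expr(*binding)):
        marking[p].append(tok)

# Example: move even numbers from place 'a' to 'b', doubling them.
t = Transition("double_even", ["a"], ["b"],
               guard=lambda x: x % 2 == 0,
               expr=lambda x: [x * 2])
marking = {"a": [1, 2, 3, 4], "b": []}
while (bs := list(enabled_bindings(marking, t))):
    fire(marking, t, random.choice(bs))   # random interleaving, one path
print(marking)  # e.g. {'a': [1, 3], 'b': [4, 8]}
```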

Relevance:

100.00%

Publisher:

Abstract:

Economic policy-making has long been more integrated than social policy-making, in part because the statistics and much of the analysis that support economic policy are based on a common conceptual framework: the system of national accounts. People interested in economic analysis and economic policy share a common language of communication, one that includes both concepts and numbers. This paper examines early attempts to develop a system of social statistics that would mirror the system of national accounts, particularly the work on the development of social accounts that took place mainly in the 1960s and 1970s. It explores the reasons why these early initiatives failed, but argues that the preconditions now exist to develop a new conceptual framework to support integrated social statistics, and hence a more coherent, effective social policy. Optimism is warranted for two reasons. First, we can make use of the radical transformation that has taken place in information technology, both in processing data and in providing wide access to the knowledge that can flow from the data. Second, the conditions exist to begin to shift away from the straitjacket of government-centric social statistics, with its implicit assumption that governments must be the primary actors in finding solutions to social problems. By supporting the decision-making of all the players (particularly individual citizens) who affect social trends and outcomes, we can start to move beyond the sterile, ideological discussions that have dominated much social discourse in the past and begin to build social systems and structures that evolve, almost automatically, based on empirical evidence of 'what works best for whom'. The paper describes a Canadian approach to developing a framework, or common language, to support the evolution of an integrated, citizen-centric system of social statistics and social analysis. This language supports the traditional social policy that we have today; nothing is lost. However, it also supports a quite different social policy world, one where individual citizens and families (not governments) are seen as the central players: a more empirically driven world that we have referred to as the 'enabling society'.

Relevance:

100.00%

Publisher:

Abstract:

A small-scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV) with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed thallium-doped caesium iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single-shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam in a single shot, demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high-density nuclear material. With recent developments of high-power laser systems towards 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned.
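
The contrast mechanism in such an absorption radiograph follows Beer-Lambert attenuation, I = I0 exp(-μx). A minimal sketch, with attenuation coefficients that are rough assumed values rather than figures from the study:

```python
import numpy as np

# Beer-Lambert attenuation I = I0 * exp(-mu * x) behind absorption-contrast
# radiography. The coefficients below are rough illustrative assumptions for
# high-energy photons, not values reported in the paper.
mu_grout = 0.05    # 1/mm, assumed effective attenuation of grout
mu_uranium = 1.5   # 1/mm, assumed; uranium is far more attenuating

def transmitted_fraction(path_grout_mm, path_u_mm):
    return np.exp(-(mu_grout * path_grout_mm + mu_uranium * path_u_mm))

# Ray through grout only vs. ray crossing a 5 mm chord of the uranium penny.
print(transmitted_fraction(50.0, 0.0))  # bright region on the image plate
print(transmitted_fraction(45.0, 5.0))  # dark region -> penny resolved by contrast
```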

Relevance:

100.00%

Publisher:

Abstract:

Harnessing solar energy to provide for the thermal needs of buildings is one of the most promising solutions to the global energy issue. Exploiting the additional surface area provided by a building's façade can significantly increase the solar energy output. Developing a range of integrated and adaptable products that do not significantly affect the building's aesthetics is vital to enabling the building-integrated solar thermal market to expand and prosper. This work reviews and evaluates solar thermal façades in terms of the standard collector type on which they are based and their component make-up. Daily efficiency models are presented, based on a combination of the Hottel-Whillier-Bliss model and finite element simulation. Novel and market-available solar thermal systems are also reviewed and evaluated using standard evaluation methods based on experimentally determined parameters (ISO 9806). Solar thermal collectors integrated directly into the façade benefit from the additional wall insulation at the back, displaying higher efficiencies than an identical collector offset from the façade. Unglazed solar thermal façades with high-capacitance absorbers (e.g. concrete) experience a shift in peak energy yield and display a lower sensitivity to ambient conditions than traditional metallic unglazed collectors. Glazed solar thermal façades, used for high-temperature applications (domestic hot water), result in overheating of the building's interior, which can be reduced significantly through the inclusion of high-quality wall insulation. For low-temperature applications (preheating systems), the cheaper unglazed systems offer the most economic solution. Using a brighter colour for the glazing and a darker colour for the absorber yields the lowest efficiency reductions (<4%). Novel solar thermal façade solutions include solar collectors integrated into balcony rails, shading devices, louvres, windows, or gutters.
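
For context, the Hottel-Whillier-Bliss model referenced above gives the steady-state collector efficiency as η = F_R(τα) − F_R U_L (T_in − T_amb)/G. The sketch below evaluates it with assumed illustrative parameters (not the paper's measured ISO 9806 values) to show why the back insulation of a façade-integrated collector, which lowers U_L, raises efficiency:

```python
# Hottel-Whillier-Bliss steady-state collector efficiency:
#   eta = F_R*(tau*alpha) - F_R*U_L * (T_in - T_amb) / G
# Parameter values below are illustrative assumptions, not measured
# ISO 9806 results from the paper.

def hwb_efficiency(FR_tau_alpha, FR_UL, T_in, T_amb, G):
    """FR_tau_alpha: optical efficiency [-]; FR_UL: heat-loss coefficient
    [W/m^2/K]; T_in, T_amb in deg C; G: irradiance [W/m^2]."""
    return FR_tau_alpha - FR_UL * (T_in - T_amb) / G

# Facade-integrated collector: wall insulation behind the absorber lowers U_L.
print(hwb_efficiency(0.75, 4.0, 40.0, 20.0, 800.0))  # integrated  -> 0.65
print(hwb_efficiency(0.75, 6.0, 40.0, 20.0, 800.0))  # offset      -> 0.60
```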

Relevance:

100.00%

Publisher:

Abstract:

An emergency lowering system for use in safety-critical crane applications is discussed. The system is used to safely lower the payload of a crane in the event of an electrical blackout. It is based on a backup power source, which is used to operate the crane while the regular supply is unavailable, and enables both horizontal and vertical movements of the crane. Two configurations for building the system are described: one with an uninterruptible power supply (UPS) or a diesel generator connected in parallel with the crane's power supply, and one with a customized energy storage connected to the intermediate DC link of the crane. In order to size the backup power source, the power required during emergency lowering needs to be understood. A simulation model is used to study and optimize the power used during emergency lowering; the simulation model and the optimizations are verified on a test hoist. Simulation results are presented with non-optimized and optimized controls for two example applications: a paper roll crane and a steel mill ladle crane. The optimizations are found to significantly reduce the power required for the crane movements during emergency lowering.
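
As a first-order illustration of the sizing problem, the payload's mechanical power m g v / η sets the scale of the backup rating for vertical movements. The masses, speeds, and efficiency below are assumed example values, not figures from the paper:

```python
# Rough sizing of the backup power source from the payload's mechanical power
# P = m * g * v / eta. Mass, speeds, and drivetrain efficiency are assumed
# example values, not figures from the paper.
g = 9.81  # m/s^2

def mechanical_power_kw(mass_kg, speed_m_s, drivetrain_eta=0.85):
    return mass_kg * g * speed_m_s / drivetrain_eta / 1e3

# Moving at a reduced (optimized) speed cuts the required backup rating.
print(mechanical_power_kw(20_000, 0.20))  # nominal speed   -> ~46 kW
print(mechanical_power_kw(20_000, 0.05))  # optimized speed -> ~12 kW
```

(In practice a descending payload can return energy to the drive, so this figure is an upper bound for the vertical movement alone; horizontal travel and drive losses also contribute.)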

Relevance:

100.00%

Publisher:

Abstract:

Ecosystem service assessment and management are shaped by the scale at which they are conducted; however, there has been little systematic investigation of the scales associated with ecosystem service processes such as production, benefit distribution, and management. We examined how social-ecological spatial scale affects ecosystem service assessment by comparing how ecosystem service distributions, trade-offs, and bundles shift across spatial scales. We used a case study in Québec, Canada, to analyze the scales of production, consumption, and management of 12 ecosystem services and to analyze how interactions among 7 of these services change across 3 scales of observation (1, 9, and 75 km²). We found that ecosystem service patterns and interactions were relatively robust across scales of observation; however, we identified 4 types of scale mismatch among ecosystem service production, consumption, and management. Based on this analysis, we propose 4 aspects of scale that ecosystem service assessments should consider.
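
A sketch of the kind of cross-scale comparison described above: pairwise service correlations computed on a fine grid and then on block-aggregated versions of the same grid. The data here are synthetic and the aggregation factors arbitrary; this illustrates the technique only, not the Québec data set:

```python
import numpy as np

# Pairwise correlation between two ecosystem services computed at several
# scales of observation, by block-aggregating a fine-resolution grid.
# Data are synthetic, purely for illustration.
rng = np.random.default_rng(0)
side = 90                                   # 90x90 fine-resolution grid
base = rng.normal(size=(side, side))
service_a = base + 0.3 * rng.normal(size=(side, side))
service_b = -base + 0.3 * rng.normal(size=(side, side))   # a trade-off pair

def block_mean(grid, b):
    """Aggregate the grid into b-by-b blocks by averaging."""
    s = grid.shape[0] // b
    return grid[:s*b, :s*b].reshape(s, b, s, b).mean(axis=(1, 3))

for b in (1, 3, 9):                         # three scales of observation
    a, c = block_mean(service_a, b), block_mean(service_b, b)
    r = np.corrcoef(a.ravel(), c.ravel())[0, 1]
    print(f"block {b}x{b}: r = {r:+.2f}")   # trade-off persists across scales
```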

Relevance:

100.00%

Publisher:

Abstract:

We analyze available heat flow data from the flanks of the Southeast Indian Ridge adjacent to or within the Australian-Antarctic Discordance (AAD), an area with patchy sediment cover and highly fractured seafloor dissected by ridge- and fracture-parallel faults. The data set includes 23 new data points collected along a 14-Ma-old isochron and 19 existing measurements from 20- to 24-Ma-old crust. Most measurement sites exhibit low heat flux (from 2 to 50 mW m⁻²) with near-linear temperature-depth profiles, except at a few sites where recent bottom-water temperature changes may have caused nonlinearity toward the sediment surface. Because the igneous basement is expected to outcrop a short distance away from any measurement site, we hypothesize that horizontally channelized water circulation within the uppermost crust is the primary cause of the widespread low heat flow values. The process may be further influenced by vertical fluid flow along the numerous fault zones that crisscross the AAD seafloor. Systematic measurements along and across the fault zones of interest, as well as seismic profiling of the sediment distribution, are required to confirm this suspected effect.
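
For reference, conductive heat flux is recovered from a near-linear temperature-depth profile as q = k dT/dz, with the gradient obtained from a least-squares fit. The depths, temperatures, and conductivity below are invented illustrative values, not the AAD measurements:

```python
import numpy as np

# Conductive heat flux from a near-linear temperature-depth profile:
# q = k * dT/dz, gradient from a least-squares fit. All values below are
# illustrative, not the AAD data.
z = np.array([0.5, 1.5, 2.5, 3.5])          # sensor depths below seafloor [m]
T = np.array([1.62, 1.65, 1.68, 1.71])      # temperatures [deg C]
k = 0.9                                      # sediment conductivity [W/m/K]

gradient, intercept = np.polyfit(z, T, 1)    # dT/dz in K/m
q = k * gradient * 1e3                       # heat flux in mW/m^2
print(f"q = {q:.1f} mW/m^2")                 # ~27 mW/m^2, within the low observed range
```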

Relevance:

100.00%

Publisher:

Abstract:

The interaction of ocean waves, currents, and seabed roughness is a complicated phenomenon in fluid dynamics. This paper describes the governing equations of motion for this phenomenon under viscous and inviscid conditions, and studies and analyzes the experimental results from a set of physical models of waves, currents, and artificial roughness. It consists of three parts. First, by establishing some typical roughness patterns, the effect of seabed roughness on a uniform current is studied, and the Manning coefficient of each type is reviewed to find the critical situation for different arrangements. Second, the effect of roughness on changes in wave parameters, such as wave height, wave length, and the wave dispersion equations, is studied. Third, with waves, current, and roughness superimposed in a flume equipped with a wave and current generator, different analyses are carried out to find the governing dimensionless numbers, which are presented to characterize and formulate this phenomenon. The first step of the model is verified by the so-called Chinese method, the second step against Kamphuis (1975), and the third step against van Rijn (1990) and Brevik and Aas (1980); in all cases reasonable agreement is obtained. Finally, new dimensionless parameters are presented for this complicated phenomenon.
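
The wave dispersion equation mentioned above, ω² = gk tanh(kh), generalizes in a uniform current U to the Doppler-shifted form (ω − kU)² = gk tanh(kh). A minimal Newton-iteration solver, with illustrative inputs:

```python
import math

# Linear dispersion relation solved for the wavenumber k by Newton iteration.
# With a uniform current U the intrinsic frequency is Doppler-shifted:
#   (omega - k*U)^2 = g*k*tanh(k*h)
# Inputs below are illustrative.
g = 9.81

def wavenumber(omega, h, U=0.0, k0=None, tol=1e-12):
    k = k0 or omega**2 / g            # deep-water initial guess
    for _ in range(100):
        f = (omega - k * U)**2 - g * k * math.tanh(k * h)
        df = (-2 * U * (omega - k * U)
              - g * math.tanh(k * h)
              - g * k * h / math.cosh(k * h)**2)
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

omega = 2 * math.pi / 6.0                # 6 s wave period
print(wavenumber(omega, h=2.0))          # still water
print(wavenumber(omega, h=2.0, U=0.3))   # following current -> smaller k, longer wave
```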

Relevance:

100.00%

Publisher:

Abstract:

Biochemical agents, including bacteria and toxins, are potentially dangerous and responsible for a wide variety of diseases. Reliable detection and characterization of small samples is necessary in order to reduce and eliminate their harmful consequences. Microcantilever sensors offer a potential alternative to the state of the art due to their small size, fast response time, and ability to operate in air and liquid environments. At present, several technological limitations inhibit the application of microcantilevers to biochemical detection and analysis, including difficulties in conducting temperature-sensitive experiments, material inadequacy resulting in insufficient cell capture, and poor selectivity for multiple analytes. This work addresses several of these issues by introducing microcantilevers with integrated thermal functionality and by introducing nanocrystalline diamond as a new material for microcantilevers. Microcantilevers are designed, fabricated, characterized, and used for the capture and detection of cells and bacteria. The first microcantilever type described in this work is a silicon cantilever with a highly uniform in-plane temperature distribution. The goal is a 100 μm square uniformly heated area that can be used for thermal characterization of films as well as for conducting chemical reactions with small amounts of material. The fabricated cantilevers can reach above 300 °C while maintaining a temperature uniformity of 2-4%, an improvement of over one order of magnitude on currently available cantilevers. The second microcantilever type is a doped single-crystal silicon cantilever with a thin coating of ultrananocrystalline diamond (UNCD). The primary application of such a device is in biological testing, where the diamond acts as a stable, electrically isolated reaction surface while the silicon layer provides controlled heating with minimal temperature variation. This work shows that composite cantilevers of this kind are an effective platform for temperature-sensitive biological experiments, such as heat lysing and the polymerase chain reaction. The rapid heat transfer of the Si-UNCD cantilever compromised the membranes of NIH 3T3 fibroblasts and lysed the cell nucleus within 30 seconds. Bacterial cells, Listeria monocytogenes V7, were captured with biotinylated heat-shock protein on the UNCD surface, and 90% of all viable cells exhibited membrane porosity due to high heat within 15 seconds. Lastly, a sensor made solely from UNCD is fabricated with the intention of detecting the presence of biological species by means of an integrated piezoresistor or through frequency-change monitoring. Since UNCD has not previously been used in piezoresistive applications, its temperature-dependent piezoresistive coefficients and gauge factors are determined first. The doped UNCD exhibits a significant piezoresistive effect, with a gauge factor of 7.53 ± 0.32 and a piezoresistive coefficient of 8.12 × 10⁻¹² Pa⁻¹ at room temperature; the piezoresistive properties are constant over the temperature range of 25-200 °C. The 300 μm long cantilevers have the highest sensitivity, 0.186 mΩ/Ω per μm of cantilever end deflection, approximately half that of similarly sized silicon cantilevers. UNCD cantilever arrays were fabricated, consisting of four sixteen-cantilever arrays of lengths 20-90 μm in addition to an eight-cantilever array of length 120 μm. Laser Doppler vibrometry (LDV) was used to measure the cantilever resonant frequencies, which ranged from 218 kHz to 5.14 MHz in air and from 73 kHz to 3.68 MHz in water. The quality factors ranged from 47 to 151 in air and from 18 to 45 in water. The ability to measure the frequencies of the cantilever arrays opens the possibility of detecting individual bacteria by monitoring the frequency shift after cell capture.
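
For orientation, the first-mode resonance of a rectangular cantilever is f₁ = (λ₁²/2π)√(EI/(ρAL⁴)), and the gauge factor is GF = (ΔR/R)/ε. The sketch below uses assumed literature-typical UNCD properties and an assumed 1 μm thickness, so its outputs only roughly bracket the reported frequency range:

```python
import math

# First-mode resonant frequency of a rectangular cantilever,
#   f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)), lambda1 ~ 1.8751,
# and the piezoresistive gauge factor GF = (dR/R) / strain.
# Material values and thickness are assumptions, not the paper's numbers.
E = 9.0e11      # Young's modulus of UNCD [Pa], assumed ~900 GPa
rho = 3300.0    # density [kg/m^3], assumed

def f1_hz(L, w, t):
    I = w * t**3 / 12.0          # second moment of area
    A = w * t                    # cross-sectional area
    lam = 1.8751                 # first clamped-free eigenvalue
    return (lam**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))

print(f"{f1_hz(90e-6, 20e-6, 1e-6)/1e3:.0f} kHz")   # longest array cantilever
print(f"{f1_hz(20e-6, 10e-6, 1e-6)/1e6:.2f} MHz")   # shortest array cantilever

# Gauge factor from a resistance change at a known strain (illustrative values):
dR_over_R, strain = 7.5e-4, 1e-4
print(f"GF = {dR_over_R / strain:.2f}")              # ~7.5, cf. reported 7.53 +/- 0.32
```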

Relevance:

100.00%

Publisher:

Abstract:

The modelling of diffusive terms in particle methods is a delicate matter, and several models have been proposed in the literature to take such terms into account. The diffusion velocity method (DVM), originally designed for the diffusion of passive scalars, turns diffusive terms into convective ones by expressing them as a divergence involving a so-called diffusion velocity. In this paper, DVM is extended to the diffusion of vectorial quantities in the three-dimensional Navier–Stokes equations, in their incompressible, velocity–vorticity formulation. The integration of a large eddy simulation (LES) turbulence model is investigated and a general DVM formulation is proposed. Both with and without LES, a novel expression of the diffusion velocity is derived which is easier to approximate and which highlights the analogy with the original formulation for scalar transport. From this starting point, DVM is analysed in one dimension, both analytically and numerically on test cases, to demonstrate its good behaviour.
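
In the scalar case the idea is that D ∂²c/∂x² = −∂(u_d c)/∂x with diffusion velocity u_d = −D (∂c/∂x)/c, so diffusion becomes pure convection of the particles. Here is a one-dimensional sketch with a Gaussian kernel density estimate standing in for the particle representation; the regularization and parameters are illustrative choices, not the paper's scheme:

```python
import numpy as np

# 1D diffusion velocity method for a passive scalar: the diffusion term
# D * d2c/dx2 is recast as advection with u_d = -D * (dc/dx) / c, so the
# particles are simply convected at u_d.
D, dt, steps = 1e-3, 1e-3, 2000
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.05, 400)     # particles sampling an initial Gaussian

def diffusion_velocity(xp, h=0.02, eps=1e-12):
    """u_d = -D * grad(c)/c, with c estimated by a Gaussian kernel (bandwidth h)."""
    dx = xp[:, None] - xp[None, :]
    K = np.exp(-dx**2 / (2 * h**2))
    c = K.sum(axis=1)                          # (unnormalized) density estimate
    grad_c = (-dx / h**2 * K).sum(axis=1)      # its spatial derivative
    return -D * grad_c / (c + eps)             # normalization cancels in the ratio

for _ in range(steps):
    x += dt * diffusion_velocity(x)            # pure convection step

# Variance should grow roughly like 2*D*t (kernel smoothing biases it slightly low).
print(np.var(x), 0.05**2 + 2 * D * dt * steps)
```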

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this paper is twofold. Firstly, it presents a preliminary and ethnomethodologically informed analysis of the way in which the growing structure of a particular program's code was ongoingly derived from its earliest stages. This was motivated by an interest in how the detailed structure of the completed program 'emerged from nothing' as a product of the concrete practices of the programmer within the framework afforded by the language. The analysis is broken down into three sections that discuss: the beginnings of the program's structure; the incremental development of structure; and finally the code productions that constitute the structure and the importance of the programmer's stock of knowledge. The discussion attempts to understand and describe the emerging structure of code rather than focus on generating 'requirements' for supporting the production of that structure. Due to time and space constraints, however, only a relatively cursory examination of these features was possible. Secondly, the paper presents some thoughts on the difficulties associated with the analytic, in particular ethnographic, study of code, drawing on general problems as well as issues arising from the difficulties and failings encountered in the analysis presented in the first section.

Relevance:

100.00%

Publisher:

Abstract:

Planar cell polarity (PCP) occurs in the epithelia of many animals and can lead to the alignment of hairs, bristles, and feathers; physiologically, it can organise ciliary beating. Here we present two approaches to modelling this phenomenon, with the aim of discovering the basic mechanisms that drive PCP while keeping the models mathematically tractable. We first present a feedback and diffusion model, in which adjacent cell sides of neighbouring cells are coupled by a negative feedback loop and diffusion acts within each cell. This approach can give rise to polarity, but also to period-two patterns. Polarisation arises via an instability, provided the feedback is sufficiently strong and the diffusion sufficiently weak. We then discuss a conservative model in which proteins within a cell are redistributed depending on the amount of protein in the neighbouring cells, coupled with intracellular diffusion; in this case polarity can arise from weakly polarised initial conditions or via a wave, provided the diffusion is weak enough. Both models can overcome small anomalies in the initial conditions. Furthermore, the range of the effects of groups of cells with properties different from those of the surrounding cells depends on the strength of the initial global cue and on the intracellular diffusion.
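
A toy discretization of the feedback and diffusion idea, invented here for illustration (not the paper's equations): each cell carries protein levels on its left and right sides, abutting sides of neighbouring cells inhibit one another, and intracellular diffusion exchanges protein between the two sides:

```python
import numpy as np

# Toy 1D feedback-and-diffusion model: cell i has side levels L[i], R[i].
# R[i] and L[i+1] (abutting sides of neighbouring cells) inhibit each other;
# intracellular diffusion d mixes the two sides of the same cell.
# Equations and parameters are an illustrative discretization only.
rng = np.random.default_rng(1)
n, steps, dt = 40, 20_000, 0.01
alpha, d = 6.0, 0.05              # feedback strength, intracellular diffusion
L = 1.0 + 0.01 * rng.normal(size=n)
R = 1.0 + 0.01 * rng.normal(size=n)

def inhibition(x):
    return 1.0 / (1.0 + x**alpha)  # decreasing (negative-feedback) response

for _ in range(steps):
    Lp = np.roll(L, -1)            # L of the right-hand neighbour (periodic)
    Rm = np.roll(R, 1)             # R of the left-hand neighbour
    dL = inhibition(Rm) - L + d * (R - L)
    dR = inhibition(Lp) - R + d * (L - R)
    L += dt * dL
    R += dt * dR

polarity = R - L
# A spatially uniform sign in R - L indicates planar polarity; with other
# parameter choices the same scheme can instead lock into period-two patterns.
print(np.sign(polarity[:10]), polarity.std())
```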