926 results for Complex engineering problems
Abstract:
Motion capture is a key tool for quantitative motion analysis. Since the nineteenth century, several motion capture systems have been developed for biomechanics research, animation, games and movies. Biomechanics and kinesiology draw on knowledge from distinct fields, engineering and the health sciences, and precise human motion analysis requires knowledge from both. Didactic tools and methods are therefore needed to support research, teaching and learning. The motion capture and analysis devices currently found on the market and in educational institutions present difficulties for didactic practice: they are hard to transport, expensive, and give the user limited freedom during data acquisition. As a result, motion analysis is either performed qualitatively or performed quantitatively in highly complex laboratories. Addressing these problems, this work presents the development of a motion capture system for didactic use: a cheap, light, portable and easy-to-use device with free software. The design includes device selection, software development and testing. The developed system uses Microsoft's Kinect, chosen for its low cost, low weight, portability and ease of use, and delivers three-dimensional data with a single peripheral device. The proposed programs use this hardware to capture motion, store and reproduce recordings, process the motion data and present it graphically.
Abstract:
Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since become a coveted, albeit elusive, goal. Recent studies have shown that the so far inconclusive results regarding a quantum enhancement may have been partly due to the benchmark problems used being unsuitable: these problems had an inherently too simple structure, allowing both traditional resources and quantum annealers to solve them with no special effort. The need has therefore arisen to generate harder benchmarks which would possess the discriminative power to separate classical scaling of performance with problem size from quantum scaling. We introduce here a practical technique for engineering extremely hard spin-glass Ising-type problem instances that does not require "cherry picking" from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm. We demonstrate the genuine thermal hardness of our generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to benchmark experimental quantum annealers, as well as any other optimization algorithm.
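The idea of treating hard-instance generation itself as an optimization problem can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the authors' algorithm: the system is a tiny Ising spin glass, and the hardness proxy is the fraction of random greedy single-spin-flip descents that get stuck above the exact ground state. A coupling is perturbed at random, and the change is kept only if the instance does not get easier.

```python
import random

def energy(s, J):
    """Ising energy E(s) = -sum_{i<j} J[i][j] * s_i * s_j (no local fields)."""
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def greedy_descent(s, J):
    """Steepest single-spin-flip descent; returns the local-minimum energy."""
    while True:
        e = energy(s, J)
        best_i, best_e = None, e
        for i in range(len(s)):
            s[i] = -s[i]             # try flipping spin i
            e2 = energy(s, J)
            s[i] = -s[i]             # undo the trial flip
            if e2 < best_e:
                best_i, best_e = i, e2
        if best_i is None:           # no flip improves: local minimum reached
            return e
        s[best_i] = -s[best_i]

def hardness(J, n, trials=100):
    """Proxy: fraction of random greedy descents that miss the ground state."""
    rng = random.Random(0)           # fixed seed so scores are comparable
    ground = min(energy([1 if (m >> k) & 1 else -1 for k in range(n)], J)
                 for m in range(1 << n))   # exact enumeration, small n only
    stuck = sum(greedy_descent([rng.choice((-1, 1)) for _ in range(n)], J)
                > ground + 1e-9 for _ in range(trials))
    return stuck / trials

def harden(n=6, steps=30, seed=1):
    """Hill-climb in coupling space: keep a perturbation if hardness does not drop."""
    rng = random.Random(seed)
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            J[i][j] = rng.choice((-1.0, 1.0))
    h = hardness(J, n)
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        old = J[i][j]
        J[i][j] = old + rng.gauss(0.0, 0.5)
        h2 = hardness(J, n)
        if h2 >= h:
            h = h2
        else:
            J[i][j] = old            # revert perturbations that make it easier
    return J, h
```

At realistic sizes the exact ground-state enumeration would of course be replaced by the thermodynamic analysis the abstract describes; the sketch only shows the outer "optimize the instance" loop.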
Abstract:
Acknowledgements The authors thank the children, their parents and school staff who participated in this research, and who so willingly gave us their time, help and support. They also thank Steven Knox and Alan Clelland for their work on programming the mobile phone application. Additional thanks to DynaVox Inc. for supplying the Vmax communication devices to run our system on, and to Sensory Software Ltd for supplying us with their AAC software. This research was supported by Research Councils UK's Digital Economy Programme and EPSRC (grant numbers EP/F067151/1, EP/F066880/1, EP/E011764/1, EP/H022376/1 and EP/H022570/1).
Abstract:
The relationship between research and learning and teaching has been described as "amongst the most intellectually tangled, managerially complex and politically contentious issues in mass higher education" (Scott, 2005, p. 53). Despite this, the argument that, in order to achieve high-quality scholarly outcomes, university teachers need to adopt an approach to teaching similar to that of research (founded upon academic rigour and evidence) has long been discussed in the literature. However, the practicalities of promoting an empirical and evidence-based approach to teaching in engineering education make dealing with the research/teaching nexus a somewhat challenging proposition. Using a phenomenographic approach, bringing together and applying the findings of a mixed-methodology study, the workshop will adopt an activity-based, interactive approach to encourage staff to consider the challenges and benefits of adopting an evidence-based approach to learning and teaching through the use of research to inform their own practice. © 2009 Authors.
Abstract:
The organisational decision-making environment is complex, and decision makers must deal with uncertainty and ambiguity on a continuous basis. Managing and handling decision problems and implementing a solution require an understanding of the complexity of the decision domain, to the point where the problem and its complexity, as well as the requirements for supporting decision makers, can be described. Research in the Decision Support Systems domain has been extensive over the last thirty years, with an emphasis on the development of further technology and better applications on the one hand and, on the other, a social approach focused on understanding what decision making is about and how developers and users should interact. This research project takes a combined approach that endeavours to understand the thinking behind managers' decision making, as well as their informational and decisional guidance and decision support requirements. It utilises a cognitive framework, developed in 1985 by Humphreys and Berkeley, that juxtaposes the mental processes and ideas of decision problem definition and problem solution, developed in tandem through cognitive refinement of the problem based on the analysis and judgement of the decision maker. The framework separates what is essentially a continuous process into five distinct levels of abstraction of a manager's thinking and suggests a structure for the underlying cognitive activities. Alter (2004) argues that decision support provides a richer basis than decision support systems, in both practice and research. The literature on decision support, especially regarding modern high-profile systems such as Business Intelligence and Business Analytics, can give the impression that all 'smart' organisations use decision support and data analytics capabilities for all of their key decision-making activities.
However, this empirical investigation indicates a very different reality.
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
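The mapping described above, from a network whose excitons hop between states until a photon is emitted to a first-passage (phase-type) sampling process, can be sketched with a toy Gillespie-style CTMC simulator. The three-state topology and all rate constants below are illustrative assumptions, not the dissertation's fabricated network.

```python
import random

def first_passage_time(rates, start, absorbing, rng):
    """Sample the time to absorption of a CTMC (Gillespie-style simulation)."""
    t, state = 0.0, start
    while state != absorbing:
        out = rates[state]                 # dict: next_state -> transition rate
        total = sum(out.values())
        t += rng.expovariate(total)        # exponential holding time in `state`
        r, acc = rng.uniform(0.0, total), 0.0
        for nxt, rate in out.items():      # pick next state with prob rate/total
            acc += rate
            if r <= acc:
                state = nxt
                break
    return t

# Toy RET-like chain: an excited donor either transfers to an acceptor
# (rate 2.0) or emits directly (rate 1.0); the acceptor emits at rate 4.0.
rates = {
    "donor":    {"acceptor": 2.0, "emitted": 1.0},
    "acceptor": {"emitted": 4.0},
}
rng = random.Random(0)
samples = [first_passage_time(rates, "donor", "emitted", rng) for _ in range(20000)]
```

For this chain the absorption time is phase-type with mean 1/3 + (2/3)(1/4) = 0.5 time units, which the empirical average of the samples approaches; changing the "geometry" (the rate dictionary) reprograms the sampled distribution, mirroring the abstract's point.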
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
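The MLE-based taggant identification step can be sketched with a deliberately simplified model: here each candidate taggant is assumed to have a single exponential decay rate (a stand-in for the full phase-type signature), and the taggant whose rate maximizes the likelihood of the detected photon delays is reported. The rates and photon counts are illustrative assumptions.

```python
import math
import random

def log_likelihood(times, rate):
    """Log-likelihood of i.i.d. exponential photon delays: sum of ln(rate) - rate*t."""
    return sum(math.log(rate) - rate * t for t in times)

def identify(times, candidate_rates):
    """MLE identification: return the candidate rate with the highest likelihood."""
    return max(candidate_rates, key=lambda r: log_likelihood(times, r))

# Simulate a few hundred detected photons from a taggant with decay rate 4.0
# (arbitrary time units), then identify it among candidate signatures.
rng = random.Random(42)
photons = [rng.expovariate(4.0) for _ in range(300)]
```

Even with only 300 photons the correct candidate wins comfortably, consistent with the abstract's claim that a few hundred detected photons suffice.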
Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms for a wide range of applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor/GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.
Abstract:
Recent developments in tailoring the structural and chemical properties of colloidal metal nanoparticles (NPs) have led to significant enhancements in catalyst performance. Controllable colloidal synthesis has also allowed tailor-made NPs to serve as mechanistic probes for catalytic processes. The innovative use of colloidal NPs to gain fundamental insights into catalytic function will be highlighted across a variety of catalytic and electrocatalytic applications. The engineering of future heterogeneous catalysts is also moving beyond size, shape and composition considerations. Advancements in understanding structure-property relationships have enabled incorporation of complex features such as tuning surface strain to influence the behavior of catalytic NPs. Exploiting plasmonic properties and altering colloidal surface chemistry through functionalization are also emerging as important areas for rational design of catalytic NPs. This news article will highlight the key developments and challenges to the future design of catalytic NPs.
Abstract:
The real-time optimization of large-scale systems is a difficult problem due to the need for complex models involving uncertain parameters and the high computational cost of solving such problems by a decentralized approach. Extremum-seeking control (ESC) is a model-free real-time optimization technique which can estimate unknown parameters and can optimize nonlinear time-varying systems using only a measurement of the cost function to be minimized. In this thesis, we develop a distributed version of extremum-seeking control which allows large-scale systems to be optimized without models and with minimal computing power. First, we develop a continuous-time distributed extremum-seeking controller. It has three main components: consensus, parameter estimation, and optimization. The consensus provides each local controller with an estimate of the cost to be minimized, allowing them to coordinate their actions. Using this cost estimate, parameters for a local input-output model are estimated, and the cost is minimized by following a gradient descent based on the estimate of the gradient. Next, a similar distributed extremum-seeking controller is developed in discrete time. Finally, we consider an interesting application of distributed ESC: formation control of high-altitude balloons for high-speed wireless internet. These balloons must be steered into a favourable formation where they are spread out over the Earth and provide coverage to the entire planet. Distributed ESC is applied to this problem and is shown to be effective for a system of 1200 balloons subjected to realistic wind currents. The approach does not require a wind model and uses a cost function based on a Voronoi partition of the sphere. Distributed ESC is able to steer balloons from a few initial launch sites into a formation which provides coverage to the entire Earth, and can maintain a similar formation as the balloons move with the wind around the Earth.
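The core model-free mechanism of extremum seeking, optimizing using only a cost measurement, can be sketched for a single agent on a static cost map. A sinusoidal dither probes the cost, and demodulating the measurement with the same sinusoid yields a gradient estimate that drives a slow descent. All gains, frequencies and the quadratic cost below are illustrative assumptions, not the thesis's controller.

```python
import math

def extremum_seek(cost, u0=0.0, a=0.2, omega=5.0, gain=0.2, dt=0.01, steps=40000):
    """Discrete-time perturbation-based extremum seeking on a static cost map.

    The averaged update is approximately -(gain*a/2)*J'(u_hat), i.e. a gradient
    descent on the cost J, even though only cost values are ever measured.
    """
    u_hat = u0
    for k in range(steps):
        d = math.sin(omega * k * dt)   # dither signal
        y = cost(u_hat + a * d)        # only a cost measurement is needed
        u_hat -= gain * dt * y * d     # demodulate: y*d estimates the gradient
    return u_hat
```

For example, on the hypothetical cost J(u) = (u - 3)^2 the estimate settles near the minimizer u = 3 to within the dither-induced ripple; the thesis's distributed version additionally replaces the measured cost with a consensus estimate shared among agents.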
Abstract:
The aim of this article is to diagnose the perception of the factors involved in the academic performance of students in five undergraduate programs at a higher-education institution in Mexico, in order to identify areas of opportunity and suggest policies and strategies to raise performance. A sample of 1651 students was used; data were obtained from a thirty-item Likert-scale questionnaire on perceived academic performance. An exploratory factor analysis was carried out to reduce the data, ease interpretation and validate the instrument. Three factors were identified: a) the role of the teachers, b) assessment and c) student motivation. A comparative analysis by program was conducted. Students were found to perceive that most teachers do not concern themselves with the situation of students at risk of failing; in addition, teachers rarely motivate students and seldom express pride in students' academic achievements. Half of the participants think that teachers do not cover the syllabus in full. The students themselves were found to be highly motivated, which is positive because they are dedicated and responsible. The article concludes with a series of suggestions and an explanation of the implications of this work for higher-education institutions.
Abstract:
Electrospun nanofibers are a promising material for ligamentous tissue engineering; however, the weak mechanical properties of fibers to date have limited their clinical use. The goal of this work was to modify electrospun nanofibers to create a robust structure that mimics the complex hierarchy of native tendons and ligaments. The scaffolds fabricated in this study consisted of either random or aligned nanofibers in flat sheets, or rolled nanofiber bundles that mimic the size scale of fascicle units in primarily tensile load-bearing soft musculoskeletal tissues. Altering nanofiber orientation and geometry significantly affected mechanical properties; most notably, aligned nanofiber sheets had the greatest modulus, 125% higher than that of random nanofiber sheets and 45% higher than that of aligned nanofiber bundles. Modifying aligned nanofiber sheets to form aligned nanofiber bundles also resulted in approximately 107% higher yield stresses and 140% higher yield strains. The mechanical properties of aligned nanofiber bundles were in the range of those of the native ACL: modulus = 158 ± 32 MPa, yield stress = 57 ± 23 MPa and yield strain = 0.38 ± 0.08. Adipose-derived stem cells cultured on all surfaces remained viable and proliferated extensively over a 7-day culture period, and cells elongated on nanofiber bundles. The results of the study suggest that aligned nanofiber bundles may be useful for ligament and tendon tissue engineering based on their mechanical properties and ability to support cell adhesion, proliferation, and elongation.
Abstract:
For a structural engineer, effective communication and interaction with architects is a key skill for success throughout their professional career, and its importance cannot be overstated. Structural engineers and architects have to share a common language and understanding of each other in order to achieve the most desirable architectural and structural designs. This interaction and engagement develops during their professional career but needs to be nurtured during their undergraduate studies. The objective of this paper is to present the strategies employed to engage higher-order thinking in structural engineering students in order to help them solve complex problem-based learning (PBL) design scenarios presented by architecture students. The strategies were applied in the experimental setting of an undergraduate module in structural engineering at Queen's University Belfast in the UK. They comprised active learning to engage with content knowledge, the use of physical conceptual structural models to reinforce key concepts and, finally, reinforcing the need for hand sketching of ideas to promote higher-order problem solving. The strategies were evaluated through student survey, student feedback and module facilitator (this author) reflection. They were qualitatively perceived by the tutor, and quantitatively evaluated by students in a cross-sectional study, to help interaction with the architecture students, aid interdisciplinary learning and help students creatively solve problems (through higher-order thinking). The students clearly enjoyed this module and, in particular, interacting with structural engineering tutors and students from another discipline.
Abstract:
It has become increasingly common for tasks traditionally carried out by engineers to be undertaken by technicians and technologists with access to sophisticated computers and software that can often perform complex calculations that were previously the responsibility of engineers. Not surprisingly, this development raises serious questions about the future role of engineers and the education needed to address these changes in technology, as well as emerging priorities ranging from societal to environmental challenges. In response to these challenges, a new design module was created for undergraduate engineering students to design and build temporary shelters for a wide variety of end users, from refugees to the homeless and children. Even though the module provided guidance on principles of design thinking and methods for observing users' needs through field studies, the students found it difficult to respond to the needs of specific end users and instead focused more on purely technical issues.
Abstract:
The mammalian binaural cue of interaural time difference (ITD) and cross-correlation have long been used to determine the point of origin of a sound source. The ITD can be defined as the difference between the points in time at which a sound from a single location arrives at each individual ear [1]. From this time difference, the brain can calculate the angle of the sound source in relation to the head [2]. Cross-correlation compares the similarity of each channel of a binaural waveform, producing the time lag or offset required for both channels to be in phase with one another. This offset corresponds to the maximum value produced by the cross-correlation function and can be used to determine the ITD, and thus the azimuthal angle θ, of the original sound source. However, in indoor environments, cross-correlation is known to have problems with both sound reflections and reverberations. Additionally, cross-correlation has difficulty localising short-term complex noises when they occur within a longer-duration waveform, i.e. in the presence of background noise: because the cross-correlation algorithm processes the entire waveform, the short-term complex noise can be effectively ignored. This paper presents a thresholding technique which improves localisation of short-term complex sounds in the midst of background noise. To determine the success of this thresholding technique, twenty-five sounds were recorded in a dynamic and echoic environment, consisting of hand-claps, finger-clicks and speech. The proposed technique was compared to the regular cross-correlation function for the same waveforms, and the azimuthal angles determined for each individual sample were averaged. The sound localisation accuracy over all twenty-five sound samples is as follows: cross-correlation alone, 44%; cross-correlation with thresholding, 84%.
From these results, it is clear that this proposed technique is very successful for the localisation of short-term complex sounds in the midst of background noise and in a dynamic and echoic indoor environment.
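The thresholding idea can be sketched in a short, self-contained Python example. The waveform, the transient's amplitude and position, and the threshold value are all illustrative assumptions, not the paper's data: background noise that is uncorrelated between the two channels is gated out, so only the short transient contributes to the cross-correlation and the recovered lag is the ITD in samples.

```python
import random

def best_lag(left, right, max_lag):
    """Return the lag (in samples) maximizing the cross-correlation of two channels."""
    best, best_score = 0, float("-inf")
    n = len(left)
    for lag in range(-max_lag, max_lag + 1):
        lo, hi = max(0, -lag), min(n, len(right) - lag)
        score = sum(left[i] * right[i + lag] for i in range(lo, hi))
        if score > best_score:
            best, best_score = lag, score
    return best

def gate(signal, thresh):
    """Threshold the waveform: keep only samples louder than the background."""
    return [s if abs(s) > thresh else 0.0 for s in signal]

# Two channels of background noise (uncorrelated between the ears) plus the
# same short transient, arriving 5 samples later at the right ear (the ITD).
rng = random.Random(7)
left = [rng.uniform(-1.0, 1.0) for _ in range(1000)]
right = [rng.uniform(-1.0, 1.0) for _ in range(1000)]
left[100] += 8.0    # hand-clap-like transient
right[105] += 8.0   # same transient, delayed by the ITD

# Gating removes the background, so only the transient drives the correlation.
itd = best_lag(gate(left, 1.5), gate(right, 1.5), max_lag=20)
```

On the raw channels the uncorrelated background contributes competing correlation peaks; on the gated channels only the transient samples survive, so the peak sits exactly at the true delay of 5 samples.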
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06