974 results for Explanatory Sequential Design
Abstract:
This study adapted the current model of science undergraduate research experiences (UREs) and applied this novel modification to include community college students. Numerous researchers have examined the efficacy of UREs in improving undergraduate retention and graduation rates, as well as matriculation rates for graduate programs. However, none have detailed the experience for community college students, and few have employed qualitative methodologies to gather relevant descriptive data from URE participants. This study included perspectives elicited from both non-traditional student participants and the established laboratory community. The purpose of this study was to determine the effectiveness of the traditional model for a non-traditional student population. The research effort described here utilized a qualitative design and an explanatory case study methodology. Six non-traditional students from the Maine Community College System participated in this study. Student participants were placed in six academic research laboratories located throughout the state. Student participants were interviewed three times during their ten-week internship and asked to record their personal reflections in electronic format. Participants from the established research community were also interviewed, including both faculty mentors and other student laboratory personnel. Ongoing comparative analysis of the textual data revealed that laboratory organizational structure and social climate significantly influence acculturation outcomes for non-traditional URE participants. Student participants experienced a range of acculturation outcomes, from full integration to marginalization. URE acculturation outcomes influenced the development of non-traditional students' professional and academic self-concepts. Positive changes in students' self-concepts resulted in greater commitment to individual professional goals and academic aspirations.
The findings from this study suggest that traditional science URE models can be successfully adapted to meet the unique needs of a non-traditional student population – community college students. These interpretations may encourage post-secondary educators, administrators, and policy makers to consider expanded access and support for non-traditional students seeking science URE opportunities.
Abstract:
This doctoral thesis, entitled Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths, aims to deepen knowledge of a particular antenna measurement system: the compact range operating in the millimeter-wavelength frequency bands. The thesis was developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurement, currently running four facilities that operate in different configurations: a Gregorian compact antenna test range, spherical near field, planar near field, and a semi-anechoic arch system. The research work performed for this thesis contributes to knowledge of the first measurement configuration at higher frequencies, beyond the microwave region where the Radiation Group already offers customer-level performance. To reach this high-level purpose, a set of scientific tasks was carried out sequentially; these are succinctly described in the subsequent paragraphs. The first step was a review of the state of the art. This study of the scientific literature covered measurement practice in compact antenna test ranges together with the particularities of millimeter-wavelength technology. The joint study of both fields of knowledge converged, where these measurement facilities are concerned, on a series of technological challenges that become serious bottlenecks at different stages: analysis, design and assessment. Second, after the overview study, focus was placed on electromagnetic analysis algorithms. These formulations make it possible to compute electromagnetic features of interest, such as the field distribution phase or the stray-signal behavior of particular structures, when they interact with sources of electromagnetic waves.
Properly operated, a CATR facility features collimation optics for electromagnetic waves that are large in terms of wavelengths. Accordingly, the electromagnetic analysis involves a large number of mathematical unknowns that grows with frequency, following different polynomial laws depending on the algorithm used. In particular, the optics configuration of interest here is the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, and this flexibility is the core of an algorithm's ability to support the subsequent design tasks. This thesis' contribution to this field consists of a formulation that is both flexible with respect to the analysis geometry and computationally efficient. Two algorithms were developed; while based on the same hybridization principle, they achieve different orders of physical accuracy at different computational cost. Their CATR design capabilities were inter-compared, yielding both qualitative and quantitative conclusions about their scope. Third, interest shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced at the analysis stage reappears in the in-chamber field probing stage. The naturally lower dynamic range available from semiconductor millimeter-wave sources additionally requires longer integration times at each probing point. These peculiarities greatly increase the difficulty of performing assessment in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of the millimeter-wavelength range.
The value of range assessment, however, lies towards the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques that achieve substantial data-reduction ratios. They also increase the robustness of the results to noise, which amounts to a virtual rise in the setup's available dynamic range. Fourth, the environmental sensitivity of millimeter wavelengths was addressed. Electromagnetic experiments are well known to drift because their results depend on the surrounding environment. At millimeter wavelengths this effect relegates many practices that are industrial at microwave frequencies to the experimental stage. In particular, evolution of the atmosphere, even within acceptable conditioning bounds, produces drift phenomena that completely mask the experimental results. The contribution of this thesis in this respect consists of electrically modeling the indoor atmosphere of a CATR as a function of the environmental variables that affect the range's performance. A simple model was developed that expresses a high-level phenomenon, feed-probe phase drift, as a function of low-level quantities that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed, and chamber conditioning is automatically extended towards higher frequencies. In summary, the purpose of this thesis is to deepen knowledge of compact antenna test ranges at millimeter wavelengths. This knowledge is presented through the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operative facility, stages at each of which bottlenecks currently exist that seriously compromise antenna measurement practice at millimeter wavelengths.
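The environmental compensation idea described in this abstract can be sketched as a simple linear correction. The coefficients and the reference atmosphere below are invented placeholders, not values from the thesis, which fits its own model to chamber data:

```python
# Hypothetical linear drift model: k_rh, k_t and the reference state
# (45 % RH, 22 C) are illustrative placeholders, not fitted values.
def phase_drift_deg(rh, temp_c, rh0=45.0, t0=22.0, k_rh=0.08, k_t=0.35):
    """Feed-probe phase drift (degrees) as a function of relative humidity (%)
    and temperature (C) deviations from a reference indoor atmosphere."""
    return k_rh * (rh - rh0) + k_t * (temp_c - t0)

def compensate(measured_phase_deg, rh, temp_c):
    """Subtract the modelled drift so probings taken under different indoor
    atmospheres can be referred to the same reference state."""
    return measured_phase_deg - phase_drift_deg(rh, temp_c)
```

With a model of this kind, quiet-zone probings taken hours apart can be compared directly, which is what extends chamber conditioning towards higher frequencies.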
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models based on the traditional concept of symbolic logic. The expressive power of these languages promises to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required to meet the demands of applications such as those contemplated for next-generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears to be a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source-language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special-purpose parallel architectures. Few assumptions are made at the source-language level, so the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable-binding conflicts in AND-parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management.
A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
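The variable-binding-conflict problem mentioned above reduces, in restricted AND-parallelism, to a goal-independence check: two goals can run in parallel when they share no variable that is still unbound. A minimal sketch of that check (the term representation and the uppercase-variable convention are assumptions of this example, not the dissertation's notation):

```python
# Goals are represented as (functor, args); strings starting with an
# uppercase letter play the role of logic variables.
def vars_of(goal):
    """Set of variables occurring in a goal."""
    return {a for a in goal[1] if isinstance(a, str) and a[0].isupper()}

def independent(goal_a, goal_b, bound):
    """True if the goals share no variable that is still unbound, i.e.
    they can be executed in AND-parallel without binding conflicts."""
    shared = vars_of(goal_a) & vars_of(goal_b)
    return all(v in shared & bound for v in shared)

# p(X, Y), q(Y, Z): the shared Y forbids parallel execution until Y is bound.
print(independent(("p", ["X", "Y"]), ("q", ["Y", "Z"]), bound=set()))
print(independent(("p", ["X", "Y"]), ("q", ["Y", "Z"]), bound={"Y"}))
```

The first call prints `False` (Y is shared and unbound), the second `True` (Y already bound, so the goals are independent).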
Abstract:
Major ampullate (MA) dragline silk supports spider orb webs, combining strength and extensibility in the toughest biomaterial. MA silk evolved ~376 MYA and identifying how evolutionary changes in proteins influenced silk mechanics is crucial for biomimetics, but is hindered by high spinning plasticity. We use supercontraction to remove that variation and characterize MA silk across the spider phylogeny. We show that mechanical performance is conserved within, but divergent among, major lineages, evolving in correlation with discrete changes in proteins. Early MA silk tensile strength improved rapidly with the origin of GGX amino acid motifs and increased repetitiveness. Tensile strength then maximized in basal entelegyne spiders, ~230 MYA. Toughness subsequently improved through increased extensibility within orb spiders, coupled with the origin of a novel protein (MaSp2). Key changes in MA silk proteins therefore correlate with the sequential evolution of high-performance orb spider silk and could aid design of biomimetic fibers.
Abstract:
Molecular analysis of complex modular structures, such as promoter regions or multi-domain proteins, often requires the creation of families of experimental DNA constructs having altered composition, order, or spacing of individual modules. Generally, creation of every individual construct of such a family uses a specific combination of restriction sites. However, convenient sites are not always available and the alternatives, such as chemical resynthesis of the experimental constructs or engineering of different restriction sites onto the ends of DNA fragments, are costly and time consuming. A general cloning strategy (nucleic acid ordered assembly with directionality, NOMAD; WWW resource locator http://Lmb1.bios.uic.edu/NOMAD/NOMAD.html) is proposed that overcomes these limitations. Use of NOMAD ensures that the production of experimental constructs is no longer the rate-limiting step in applications that require combinatorial rearrangement of DNA fragments. NOMAD manipulates DNA fragments in the form of "modules" having a standardized cohesive end structure. Specially designed "assembly vectors" allow for sequential and directional insertion of any number of modules in an arbitrary predetermined order, using the ability of type IIS restriction enzymes to cut DNA outside of their recognition sequences. Studies of regulatory regions in DNA, such as promoters, replication origins, and RNA processing signals, construction of chimeric proteins, and creation of new cloning vehicles, are among the applications that will benefit from using NOMAD.
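The sequential, directional insertion scheme can be illustrated with a toy simulation. The "ACGT" overhang and the module names are invented for illustration; in NOMAD it is the type IIS cut that regenerates the acceptor site after each insertion:

```python
# Toy model of ordered assembly with a standardized cohesive end.
def ligate(vector, module):
    """Insert a module only if its left overhang matches the vector's current
    acceptor overhang; the module's right overhang becomes the new acceptor,
    so modules go in one at a time, in order and in one orientation."""
    left, insert, right = module
    if vector["acceptor"] != left:
        raise ValueError("incompatible cohesive ends")
    vector["inserts"].append(insert)
    vector["acceptor"] = right
    return vector

vector = {"acceptor": "ACGT", "inserts": []}
for module in [("ACGT", "promoter", "ACGT"), ("ACGT", "enhancer", "ACGT")]:
    ligate(vector, module)
print(vector["inserts"])  # modules appear in the predetermined order
```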
Abstract:
Applied colorimetry is an important module in the program of the elective subject "Colour Science: industrial applications". This course is taught in the Optics and Optometry Degree and has been used as a testbed for new teaching and assessment techniques consistent with the new European Higher Education Area. In particular, the main objective was to reduce attendance at lessons and encourage the individual and collective work of students. The reasoning behind this approach is that students are able to work at their own learning pace. Within this dynamic, we propose online lab practice based on Excel templates that our research group has developed ad hoc for different aspects of colorimetry, such as conversion between colour spaces, calculation of perceptual descriptors (hue, saturation, lightness), calculation of colour differences, colour matching of dyes, etc. The practice presented in this paper focuses on the learning of colour differences. The session is based on a specific Excel template that computes colour differences and plots different graphs of them, using the colour-difference formulas defined on the CIELAB colour space: CIE ΔE and CIE ΔE94. This template is embedded in a website that directs the student's work in a proper, organized way. The aim was to unify all the student's work from a single website, so that the student can learn autonomously and sequentially, at his or her own pace. To achieve this, all the tools, links and documents are collected for each proposed activity in order to meet guided, specific objectives. In the context of educational innovation, this type of website is normally called a WebQuest. The design of a WebQuest follows the criteria of usability and simplicity. WebQuests offer great advantages over the "Campus Virtual" toolbox available at the University of Alicante.
The Campus Virtual is an unfriendly environment for this specific purpose, since activities are organized in different sections depending on whether each is a discussion, an activity, a self-assessment or a materials download. With this separation, it is harder for the student to follow an organized sequence. Our WebQuest, by contrast, provides a more intuitive graphical environment in which all the tasks, and the resources needed to complete them, are grouped and organized along a linear sequence. In this way, guided student learning is optimized. Furthermore, with this simplification the student focuses on learning rather than wasting effort on navigation. Finally, this tool has a wide set of potential applications: online courses in applied colorimetry for postgraduate students, OpenCourseWare, etc.
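The two colour-difference formulas the template covers can be sketched directly in code. This is a minimal implementation of CIE76 ΔE*ab and CIE94 for CIELAB colours, assuming the graphic-arts weighting (kL = kC = kH = 1, SL = 1) for CIE94:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE94 colour difference with graphic-arts weighting functions."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)          # chroma of the reference colour
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    # Hue difference recovered from the other terms; clamp against rounding.
    dH_sq = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
    SC = 1.0 + 0.045 * C1
    SH = 1.0 + 0.015 * C1
    return math.sqrt((dL / kL) ** 2 + (dC / (kC * SC)) ** 2
                     + dH_sq / (kH * SH) ** 2)

print(delta_e76((50, 0, 0), (50, 3, 4)))  # → 5.0
```

For a neutral reference (C1 = 0) the two formulas coincide; for chromatic references CIE94 down-weights chroma and hue differences, which matches perceptual data better.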
Abstract:
In this work, we present a systematic method for the optimal development of bioprocesses that relies on the combined use of simulation packages and optimization tools. One of the main advantages of our method is that it allows for the simultaneous optimization of all the individual components of a bioprocess, including the main upstream and downstream units. The design task is mathematically formulated as a mixed-integer dynamic optimization (MIDO) problem, which is solved by a decomposition method that iterates between primal and master sub-problems. The primal dynamic optimization problem optimizes the operating conditions, bioreactor kinetics and equipment sizes, whereas the master level entails the solution of a tailored mixed-integer linear programming (MILP) model that decides on the values of the integer variables (i.e., the number of units in parallel and topological decisions). The dynamic optimization primal sub-problems are solved via a sequential approach that integrates the process simulator SuperPro Designer® with an external NLP solver implemented in Matlab®. The capabilities of the proposed methodology are illustrated through its application to a typical fermentation process and to the production of the amino acid L-lysine.
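The primal–master iteration has a simple skeleton, sketched below with a toy closed-form cost in place of the real primal (which is a dynamic optimization run through SuperPro Designer and Matlab). The cost model, candidate set and master rule here are invented stand-ins:

```python
# Decomposition sketch: the master proposes integer decisions, the primal
# prices them, and integer cuts (the `excluded` set) prevent repetition.
def primal_cost(n_units):
    """Stand-in for the primal sub-problem: total cost for a fixed number
    of parallel units (assumed convex toy model, not real bioprocess data)."""
    capex = 120.0 * n_units      # capital cost grows with unit count
    opex = 900.0 / n_units       # operating bottleneck relaxes in parallel
    return capex + opex

def decompose(candidates):
    """Iterate master and primal until no integer decision remains."""
    best, excluded = None, set()
    while True:
        remaining = [n for n in candidates if n not in excluded]
        if not remaining:
            return best
        n = min(remaining)       # master decision (toy rule)
        cost = primal_cost(n)    # primal sub-problem
        excluded.add(n)          # integer cut on the tried configuration
        if best is None or cost < best[1]:
            best = (n, cost)

print(decompose(range(1, 7)))  # → (3, 660.0)
```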
Abstract:
Mathematical programming can be used for the optimal design of shell-and-tube heat exchangers (STHEs). This paper proposes a mixed-integer non-linear programming (MINLP) model for the design of STHEs that rigorously follows the standards of the Tubular Exchanger Manufacturers Association (TEMA). The Bell–Delaware method is used for the shell-side calculations. This approach produces a large, non-convex model that cannot be solved to global optimality with the current state-of-the-art solvers. Instead, a sequential optimization over partial objective targets is proposed, dividing the problem into sets of related equations that are easier to solve. For each of these sub-problems, a heuristic objective function is selected based on the physical behavior of the problem. The global optimum of the original problem cannot be guaranteed even when each sub-problem is solved to global optimality, but a very good solution is always obtained. Three cases extracted from the literature were studied. The results show that in all cases the values obtained with the proposed MINLP model and its multiple objective functions improve on those reported in the literature.
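The sequential-targeting idea can be sketched with a toy two-stage example. The constraints and numbers below are invented and much simpler than TEMA/Bell–Delaware; the point is only the structure: each stage fixes its decisions by optimizing a heuristic partial objective and passes them to the next stage:

```python
# Stage 1: heuristic target on the shell side (toy velocity-cap constraint).
def stage1_pick_shell(diameters):
    """Smallest shell diameter (m) whose toy flow velocity stays below 2.0."""
    return min(d for d in diameters if 1.2 / d <= 2.0)

# Stage 2: heuristic target on the tube side, given the stage-1 decision.
def stage2_pick_tubes(shell_d, tube_counts):
    """Fewest tubes meeting a toy area target for the chosen shell."""
    return min(n for n in tube_counts if n * 0.05 * shell_d >= 1.0)

shell = stage1_pick_shell([0.4, 0.6, 0.8, 1.0])
tubes = stage2_pick_tubes(shell, range(10, 200, 10))
print(shell, tubes)  # → 0.6 40
```

Because stage 2 never revisits the stage-1 decision, the combined answer may miss the true global optimum, which is exactly the trade-off the paper accepts in exchange for tractable sub-problems.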
Abstract:
Purpose – The purpose of this paper is to analyse Information Systems outsourcing success, measuring the latter according to the satisfaction level achieved by users and taking into account three success factors: the role played by the client firm's top management; the relationships between client and provider; and the degree of outsourcing. Design/methodology/approach – A survey was carried out by means of a questionnaire answered by 398 large Spanish firms. Its results were examined using partial least squares software and through the proposal of a structural equation model. Findings – The conclusions reveal that the perceived benefits play a mediating role in outsourcing satisfaction and that these benefits can be grouped into three categories: strategic, economic and technological. Originality/value – The study identifies how some success factors are more influential than others depending on which type of benefit is ultimately sought with outsourcing.
Abstract:
This paper re-examines the stability of multi-input multi-output (MIMO) control systems designed using sequential MIMO quantitative feedback theory (QFT). In order to establish the results, recursive design equations are derived for the SISO equivalent plants employed in a sequential MIMO QFT design. The equations apply to sequential MIMO QFT designs in both the direct plant domain, which employs the elements of the plant in the design, and the inverse plant domain, which employs the elements of the plant inverse. Stability theorems giving necessary and sufficient conditions for robust closed-loop internal stability are developed for sequential MIMO QFT designs in both domains. The theorems and design equations facilitate less conservative designs and improved design transparency.
Abstract:
The suspen-dome system is a new structural form that has become popular in the construction of long-span roof structures. These domes are very slender and lightweight, their configuration is complicated, and hence sequential consideration in the structural design is needed. This paper focuses on these considerations, which include the method for designing cable prestress force, a simplified analysis method, and the estimation of buckling capacity. Buckling is one of the most important problems for dome structures. This paper presents the findings of an intensive buckling study of the Lamella suspen-dome system that takes geometric imperfection, asymmetric loading, rise-to-span ratio, and connection rigidity into consideration. Finally, suggested design and construction guidelines are given in the conclusion of this paper. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
The requirement for systems to continue to operate satisfactorily in the presence of faults has led to the development of techniques for the construction of fault-tolerant software. This thesis addresses the problem of error detection and recovery in distributed systems consisting of a set of communicating sequential processes. A method is presented for the a priori design of conversations for this class of distributed system. Petri nets are used to represent the state of, and to solve state reachability problems for, concurrent systems. The dynamic behaviour of the system can be characterised by a state-change table derived from the state reachability tree. Systematic conversation generation is possible by defining a closed boundary on any branch of the state-change table, and relating the state-change table to process attributes ensures that all necessary processes are included in the conversation. The method also ensures properly nested conversations. An implementation of the conversation scheme using the concurrent language occam is proposed. The structure of the conversation is defined using the special features of occam, and the proposed implementation gives a structure that is independent of the application and of the number of processes involved. Finally, the integrity of inter-process communications is investigated. The basic communication primitives used in message-passing systems are seen to have deficiencies when applied to systems with safety implications. Using a Petri net model, a bound for a time-out mechanism is proposed which will increase the integrity of a system involving inter-process communications.
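The reachability analysis at the heart of this method can be sketched compactly. The two-process net below (one process handing a token to another through a channel place) is a hypothetical example, not taken from the thesis:

```python
from collections import deque

def fire(marking, pre, post):
    """Fire a transition if every input place holds enough tokens;
    return the successor marking, or None if the transition is disabled."""
    if all(marking[p] >= n for p, n in pre.items()):
        m = dict(marking)
        for p, n in pre.items():
            m[p] -= n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
        return m
    return None

def reachable(initial, transitions):
    """Breadth-first construction of the reachability set (finite nets)."""
    seen = {tuple(sorted(initial.items()))}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for pre, post in transitions:
            m2 = fire(m, pre, post)
            if m2 is not None:
                key = tuple(sorted(m2.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append(m2)
    return seen

# Process P sends on a channel place; process Q receives from it.
transitions = [({"p_ready": 1}, {"chan": 1}),
               ({"chan": 1, "q_wait": 1}, {"q_done": 1})]
marks = reachable({"p_ready": 1, "q_wait": 1, "chan": 0, "q_done": 0},
                  transitions)
print(len(marks))  # → 3 reachable markings
```

The reachability tree built this way is what the thesis flattens into the state-change table from which conversation boundaries are drawn.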
Abstract:
The operating state of a photovoltaic Module Integrated Converter (MIC) is subject to change under different source and load conditions, while in past research the state swap has usually been implemented with a flow-chart-based sequential controller. In this paper, the signatures of the different operational states are evaluated and investigated, leading to an effective control-integrated finite state machine (CIFSM) that provides real-time state swaps as fast as the local control loop. The proposed CIFSM is implemented digitally for a boost-type MIC prototype and tested under a variety of load and source conditions. The test results confirm the effectiveness of the proposed CIFSM design.
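A finite state machine evaluated inside the control loop can be sketched as a transition table keyed on measured signatures. The state names and thresholds below are illustrative placeholders, not the paper's actual operating states:

```python
# Hypothetical CIFSM sketch: each state maps the measured signatures
# (input voltage, load fraction) to the next state, once per control tick.
STATES = {
    "IDLE":  lambda v_in, load: "MPPT" if v_in > 20.0 else "IDLE",
    "MPPT":  lambda v_in, load: "LIMIT" if load > 0.9 else
                                ("IDLE" if v_in <= 20.0 else "MPPT"),
    "LIMIT": lambda v_in, load: "MPPT" if load <= 0.9 else "LIMIT",
}

def step(state, v_in, load):
    """One control-loop tick: evaluate signatures, swap state if needed."""
    return STATES[state](v_in, load)

state = "IDLE"
for v_in, load in [(25.0, 0.5), (25.0, 0.95), (25.0, 0.5), (15.0, 0.5)]:
    state = step(state, v_in, load)
print(state)  # → IDLE (source dropped below the start threshold)
```

Because the state swap is just a table lookup per tick, it runs at the same rate as the local control loop, which is the property the paper emphasizes over flow-chart-based sequencing.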
Abstract:
Design verification in the digital domain, using model-based principles, is a key research objective to address the industrial requirement for reduced physical testing and prototyping. For complex assemblies, the verification of design and the associated production methods is currently fragmented, prolonged and sub-optimal, as it uses digital and physical verification stages that are deployed in a sequential manner using multiple systems. This paper describes a novel, hybrid design verification methodology that integrates model-based variability analysis with measurement data of assemblies, in order to reduce simulation uncertainty and allow early design verification from the perspective of satisfying key assembly criteria.
Abstract:
Purpose: To develop and validate a classification system for focal vitreomacular traction (VMT), with and without macular hole, based on spectral-domain optical coherence tomography (SD-OCT), intended to aid in decision-making and prognostication. Methods: A panel of retinal specialists convened to develop this system. A literature review followed by discussion of a wide range of cases formed the basis for the proposed classification. Key features on OCT were identified and analysed for their utility in clinical practice. A final classification was devised based on two sequential, independent validation exercises to improve interobserver variability. Results: This classification tool pertains to idiopathic focal VMT assessed by a horizontal line scan using SD-OCT. The system uses width (W), interface features (I), foveal shape (S), retinal pigment epithelial changes (P), elevation of vitreous attachment (E), and inner and outer retinal changes (R) to give the acronym WISPERR. Each category is scored hierarchically. Results from the second independent validation exercise indicated a high level of agreement between graders: intraclass correlation ranged from 0.84 to 0.99 for continuous variables and Fleiss' kappa values ranged from 0.76 to 0.95 for categorical variables. Conclusions: We present an OCT-based classification system for focal VMT that allows anatomical detail to be scrutinised and scored qualitatively and quantitatively using a simple, pragmatic algorithm, which may be of value in clinical practice as well as in future research studies.