929 results for "Simplified design method"
Abstract:
This thesis deals with a novel control approach based on the extension of the well-known Internal Model Principle to the case of periodic switched linear exosystems. This extension, inspired by power electronics applications, aims to provide an effective design method to robustly achieve asymptotic tracking of periodic references with an infinite number of harmonics. In the first part of the thesis the basic components of the novel control scheme are described and preliminary results on stabilization are provided. In the second part, advanced control methods for two applications from the world of high energy physics are presented.
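The internal-model mechanism the abstract refers to can be illustrated with a standard repetitive-control toy example in Python (a minimal sketch, not the thesis's switched-exosystem construction; the plant and gains are arbitrary illustrative choices): embedding a one-period memory of the tracking error in the loop lets the controller cancel every harmonic of a periodic reference at once.

```python
import numpy as np

# Toy first-order plant y[k+1] = a*y[k] + b*u[k] (illustrative values, not from the thesis)
a, b = 0.9, 0.5
N = 50                                                # samples per period of the reference
ref = np.sign(np.sin(2 * np.pi * np.arange(N) / N))   # square wave: infinitely many harmonics
gamma = 1.0                                           # update gain; converges since |1 - gamma*b| < 1

u = np.zeros(N)        # one period of the control input, refined period after period
y = np.zeros(N)
state = 0.0
for period in range(20):
    for k in range(N):
        y[k] = state                                  # output observed at sample k
        state = a * state + b * u[k]                  # u[k] acts on the next sample
    e = ref - y
    # repetitive (internal-model) update: reuse last period's error, shifted by one
    # sample to compensate the plant's unit delay; the shift wraps around the period
    u = u + gamma * np.roll(e, -1)
    print(f"period {period:2d}  max|e| = {np.abs(e).max():.4f}")
```

The printed tracking error shrinks from period to period, which is the behaviour the internal model principle guarantees for all harmonics of the reference simultaneously.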
Abstract:
This thesis deals with Context Aware Services, Smart Environments, Context Management and solutions for Device and Service Interoperability. Multi-vendor devices offer an increasing number of services and end-user applications that base their value on the ability to exploit information originating from the surrounding environment by means of an increasing number of embedded sensors, e.g. GPS, compass, RFID readers, cameras and so on. However, such devices are usually not able to exchange information because of the lack of shared data storage and common information exchange methods. A large number of standards and domain-specific building blocks are available and are heavily used in today's products. However, the use of these solutions based on ready-to-use modules is not without problems. The integration and cooperation of different kinds of modules can be daunting because of growing complexity and dependency. In this scenario it might be interesting to have an infrastructure that makes the coexistence of multi-vendor devices easy, while enabling low-cost development and smooth access to services. This sort of technology glue should reduce both software and hardware integration costs by removing the trouble of interoperability. The result should also lead to faster and simplified design, development and deployment of cross-domain applications. This thesis is mainly focused on SW architectures supporting context-aware service providers, especially on the following subjects:
- user preference based service adaptation
- context management
- content management
- information interoperability
- multi-vendor device interoperability
- communication and connectivity interoperability
Experimental activities were carried out in several domains including Cultural Heritage and indoor and personal smart spaces, all of which are considered significant test-beds in Context Aware Computing. The work evolved within European and national projects: on the European side, I carried out my research activity within EPOCH, the FP6 Network of Excellence on "Processing Open Cultural Heritage", and within SOFIA, a project of the ARTEMIS JU on embedded systems. I worked in cooperation with several international establishments, including the University of Kent, VTT (the Technical Research Centre of Finland) and Eurotech. On the national side I contributed to a one-to-one research contract between ARCES and Telecom Italia. The first part of the thesis is focused on the problem statement and related work, and addresses interoperability issues and related architecture components. The second part is focused on specific architectures and frameworks:
- MobiComp: a context management framework that I used in cultural heritage applications
- CAB: a context, preference and profile based application broker which I designed within the EPOCH Network of Excellence
- M3: a "Semantic Web based" information sharing infrastructure for smart spaces designed by Nokia within the European project SOFIA
- NoTa: a service and transport independent connectivity framework
- OSGi: the well-known Java based service support framework
The final section is dedicated to the middleware, the tools and the SW agents developed during my doctorate to support context-aware services in smart environments.
Abstract:
A new type of pavement has been gaining popularity over the last few years in Europe. It comprises a surface course of a semi-flexible material that provides significant advantages in comparison to both concrete and conventional asphalt, combining rut resistance with a degree of flexibility. It also provides good protection against the ingress of water into the foundation, since it has an impermeable surface. The semi-flexible material, generally known as grouted macadam, comprises an open-graded asphalt skeleton with 25% to 35% voids into which a cementitious slurry is grouted. This hybrid mixture provides good rut resistance and a surface highly resistant to fuel and oil spillage. Such properties allow it to be used in industrial areas, airports and harbours, which are frequently associated with heavy and slow traffic. Grouted macadams constitute a poorly understood branch of pavement technology and have generally been relegated to a role in certain specialist pavements whose performance is predicted on purely empirical evidence. Therefore, the main objectives of this project were to better understand the properties of this type of material, in order to predict its performance more realistically and to design pavements incorporating grouted macadam more accurately. Based on a standard mix design, several variables were studied during this project in order to characterise the behaviour of grouted macadams in general, and the influence of those variables on the fundamental properties of the final mixture. In this research project, one approach was used for the design of pavements incorporating grouted macadams: a traditional design method, based on laboratory determination of the stiffness modulus and the compressive strength.
Abstract:
The research frames, within the author's body of work, the specific theme of housing. Housing constitutes the field of application of the architectural project in which the characteristic traits of the architect's design method, the interpretive key of the proposed study, can be sought most effectively. The process leading to the material constitution of architecture is considered in the phases into which it is broken down, in the tools it adopts, in the objectives it sets itself, in its relationship with production systems, and in how it addresses the themes of form and programme, and it is compared with the extensive literature present in the thought of several authors close to Ignazio Gardella. In this way the traits of a methodology strongly marked by realism are defined, a methodology that lends coherence to an empirical and rational line of research, tied to an idea of classical architecture of Enlightenment origin and attentive to the demands of modernity, within which the linguistic heteronomy that characterises one of the distinctive traits of Ignazio Gardella's architecture takes shape; an aspect repeatedly interpreted as an affiliation with the twentieth-century movements that constantly intersect the architect's long career. The analysis of the housing work is conducted not through exemplary cases but over the totality of the projects, and also draws on unpublished materials. It is understood as a path of personal research into compositional processes and the use of language, and it allows the figure of Gardella to be repositioned in relation to the making of architecture and its realisation, rather than to any intent to comply with styles or a priori norms. It is the practical dimension, that of the craft, which best lends itself to the interpretation of Gardella's projects. The architect's residential buildings stand out for their ability to adapt to the constraints of the site, the client and the technology, through formal re-interpretation and the transfer from one theme to another of the essential elements that convey, through their image, a precise idea of the house and of architecture: not authorial, but recognisable and timeless.
Abstract:
A comparison between the main design methods for unpaved roads is presented in this paper. An unpaved road is made up of an unbound aggregate base course lying on a usually weak subgrade. A geosynthetic may be placed between the two to provide reinforcement and separation. The goal of a design method is to find the appropriate thickness of the base course knowing at least the traffic volume, wheel load, tire pressure, undrained cohesion of the subgrade, allowable rut depth and influence of the reinforcement. Geosynthetics can reduce the thickness or the quality of aggregate required and improve the durability of an unpaved road. Geotextiles help save aggregate through frictional interaction and separation, while geogrids do so through interlocking between their apertures and the lithic base elements. In the last chapter a case study is discussed and design thicknesses are calculated with two design methods for the three possible cases (i.e. unreinforced, geotextile reinforced, geogrid reinforced).
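As a rough illustration of the kind of calculation such design methods perform, the Python sketch below runs a minimal static load-spread / bearing-capacity check (not any of the specific methods compared in the paper): the base thickness is increased until the stress reaching the subgrade drops below its bearing capacity, and reinforcement is represented simply by a higher bearing-capacity factor. The inputs, the load-spread ratio and the bearing-capacity factors are illustrative assumptions.

```python
import math

def base_thickness(wheel_load_kN, tire_pressure_kPa, c_u_kPa, Nc, spread=0.6):
    """Smallest base thickness (m) such that the wheel load, spread through the
    aggregate at a constant angle, no longer overstresses the subgrade (Nc * c_u)."""
    radius = math.sqrt(wheel_load_kN / (math.pi * tire_pressure_kPa))  # contact radius, m
    h = 0.0
    while wheel_load_kN / (math.pi * (radius + spread * h) ** 2) > Nc * c_u_kPa:
        h += 0.005
        if h > 2.0:      # stop beyond 2 m: subgrade too weak for this simple check
            break
    return h

# Illustrative inputs: 40 kN wheel load, 550 kPa tire pressure, soft subgrade c_u = 30 kPa.
# Bearing-capacity factors loosely follow common practice: about pi unreinforced,
# about pi + 2 with a geotextile, slightly higher with a geogrid (interlock).
for label, Nc in [("unreinforced", math.pi), ("geotextile", math.pi + 2), ("geogrid", 5.71)]:
    h = base_thickness(40.0, 550.0, 30.0, Nc)
    print(f"{label:12s}: required base thickness ~ {h:.2f} m")
```

The ordering of the results (thickest unreinforced, thinner with a geotextile, thinnest with a geogrid) mirrors the qualitative point of the abstract; actual design methods additionally account for traffic repetitions and allowable rut depth.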
Abstract:
BACKGROUND Students frequently hold a number of misconceptions related to temperature, heat and energy. There is currently no concept inventory with sufficiently high internal reliability to assess these concept areas for research purposes. Consequently, there is little data on the prevalence of these misconceptions amongst undergraduate engineering students. PURPOSE (HYPOTHESIS) This work presents the Heat and Energy Concept Inventory (HECI) to assess prevalent misconceptions related to: (1) Temperature vs. Energy, (2) Temperature vs. Perceptions of Hot and Cold, (3) Factors that affect the Rate vs. Amount of Heat Transfer and (4) Thermal Radiation. The HECI is also used to document the prevalence of misconceptions amongst undergraduate engineering students. DESIGN/METHOD Item analysis, guided by classical test theory, was used to refine individual questions on the HECI. The HECI was used in a one-group, pre-test/post-test design to assess the prevalence and persistence of targeted misconceptions amongst a population of undergraduate engineering students at diverse institutions. RESULTS Internal consistency reliability was assessed using Kuder-Richardson Formula 20; values were 0.85 for the entire instrument and ranged from 0.59 to 0.76 for the four subcategories of the HECI. Student performance on the HECI rose only modestly, from 49.2% before instruction to 54.5% after. Gains on each of the individual subscales of the HECI, while generally statistically significant, were similarly modest. CONCLUSIONS The HECI provides sufficiently high estimates of internal consistency reliability to be used as a research tool to assess students' understanding of the targeted concepts. Use of the instrument demonstrates that student misconceptions are both prevalent and resistant to change through standard instruction.
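For reference, the Kuder-Richardson Formula 20 reliability quoted above is straightforward to compute from a binary item-response matrix; a small Python sketch with made-up demo data (not HECI data):

```python
import numpy as np

def kr20(responses):
    """KR-20 internal-consistency reliability for a 0/1 matrix of shape
    (n_students, n_items): k/(k-1) * (1 - sum(p*q) / var(total score))."""
    X = np.asarray(responses, dtype=float)
    k = X.shape[1]
    p = X.mean(axis=0)                      # proportion of correct answers per item
    q = 1.0 - p
    var_total = X.sum(axis=1).var(ddof=1)   # sample variance of students' total scores
    return (k / (k - 1.0)) * (1.0 - (p * q).sum() / var_total)

# Made-up responses: 6 students x 5 items, 1 = correct
demo = np.array([[1, 1, 1, 0, 1],
                 [1, 0, 1, 0, 0],
                 [0, 0, 1, 0, 0],
                 [1, 1, 1, 1, 1],
                 [0, 0, 0, 0, 1],
                 [1, 1, 0, 1, 1]])
print(f"KR-20 = {kr20(demo):.2f}")
```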
Abstract:
Background Increasing attention is being paid to improvement in undergraduate science, technology, engineering, and mathematics (STEM) education through increased adoption of research-based instructional strategies (RBIS), but high-quality measures of faculty instructional practice do not exist to monitor progress. Purpose/Hypothesis The measure of how well an implemented intervention follows the original is called fidelity of implementation. This framework was used to address the research questions: What is the fidelity of implementation of selected RBIS in engineering science courses? That is, how closely does engineering science classroom practice reflect the intentions of the original developers? Do the critical components that characterize an RBIS discriminate between engineering science faculty members who claimed use of the RBIS and those who did not? Design/Method A survey of 387 U.S. faculty teaching engineering science courses (e.g., statics, circuits, thermodynamics) included questions about class time spent on 16 critical components and use of 11 corresponding RBIS. Fidelity was quantified as the percentage of RBIS users who also spent time on the corresponding critical components. Discrimination between users and nonusers was tested using chi-square tests. Results Overall fidelity of the 11 RBIS ranged from 11% to 80% of users spending time on all required components. Fidelity was highest for RBIS with one required component: case-based teaching, just-in-time teaching, and inquiry learning. Thirteen of 16 critical components discriminated between users and nonusers for all RBIS to which they were mapped. Conclusions Results were consistent with the initial mapping of critical components to RBIS. Fidelity of implementation is a potentially useful framework for future work in STEM undergraduate education.
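The two quantities described in the method, fidelity as the share of self-reported RBIS users who also spend class time on the critical component, and a chi-square test of whether that component discriminates users from non-users, can be sketched in a few lines of Python. The counts below are invented for illustration and are not the survey's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented (user_of_RBIS, time_spent_on_critical_component) pairs for one RBIS
responses = [(True, True)] * 42 + [(True, False)] * 11 + \
            [(False, True)] * 30 + [(False, False)] * 80

users = [comp for use, comp in responses if use]
fidelity = sum(users) / len(users)            # share of claimed users covering the component
print(f"fidelity = {fidelity:.0%}")

# 2x2 contingency table (rows: non-user/user, columns: no time/time on component)
table = np.zeros((2, 2), dtype=int)
for use, comp in responses:
    table[int(use), int(comp)] += 1
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")  # does the component discriminate users from non-users?
```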
Abstract:
OBJECTIVE: To assess the memory of various subdimensions of the birth experience in the second year postpartum, and to identify women in the first weeks postpartum at risk of developing a long-term negative memory. DESIGN, METHOD, OUTCOME MEASURES: New mothers' birth experience (BE) was assessed 48-96 hours postpartum (T1) by means of the SIL-Ger and the BBCI (perception of intranatal relationships); early postnatal adjustment (week 3 pp: T1(bis)) was also assessed. Then, four subgroups of women were defined by means of a cluster-analysis, integrating the T1/T1(bis) variables. To evaluate the memory of the BE, the SIL-Ger was again applied in the second year after childbirth (T2). First, the ratings of the SIL-Ger dimensions of T1 were compared to those at T2 in the whole sample. Then, the four subgroups were compared with respect to their ratings of the birth experience at T2 (correlations, ANOVAs and t-tests). RESULTS: In general, fulfillment, emotional adaptation, physical discomfort, and anxiety improve spontaneously over the first year postpartum, whereas in negative emotional experience, control, and time-going-slowly no shift over time is observed. However, women with a negative overall birth experience and a low level of perceived intranatal relationship at T1 run a high risk of retaining a negative memory in all of the seven subdimensions of the birth experience. CONCLUSIONS: Women at risk of developing a negative long-term memory of the BE can be identified at the time of early postpartum, when the overall birth experience and the perceived intranatal relationship are taken into account.
Abstract:
INTRODUCTION The orthographic depth hypothesis (Katz and Feldman, 1983) posits that different reading routes are engaged depending on the type of grapheme/phoneme correspondence of the language being read. Shallow orthographies with consistent grapheme/phoneme correspondences favor encoding via non-lexical pathways, where each grapheme is sequentially mapped to its corresponding phoneme. In contrast, deep orthographies with inconsistent grapheme/phoneme correspondences favor lexical pathways, where phonemes are retrieved from specialized memory structures. This hypothesis, however, lacks compelling empirical support. The aim of the present study was to investigate the impact of orthographic depth on reading route selection using a within-subject design. METHOD We presented the same pseudowords (PWs) to highly proficient bilinguals and manipulated the orthographic depth of PW reading by embedding them in two separate German or French language contexts, implicating shallow or deep orthography respectively. High-density electroencephalography was recorded during the task. RESULTS The topography of the ERPs to identical PWs differed 300-360 ms post-stimulus onset when the PWs were read in different orthographic depth contexts, indicating distinct brain networks engaged in reading during this time window. The brain sources underlying these topographic effects were located within left inferior frontal (German > French), parietal (French > German) and cingulate areas (German > French). CONCLUSION Reading in a shallow context favors non-lexical pathways, reflected in a stronger engagement of frontal phonological areas in the shallow versus the deep orthographic context. In contrast, reading PWs in a deep orthographic context relies less on routine non-lexical pathways, reflected in a stronger engagement of visuo-attentional parietal areas in the deep versus the shallow orthographic context. Collectively, these results support a modulation of reading route by orthographic depth.
Abstract:
Objectives: Athletes differ in how well they stay focused on performance and avoid distraction. Drawing on the strength model of self-control, we investigated whether athletes differ not only inter-individually in their disposition to stay focused and avoid distraction, but also intra-individually in the situational availability of focused attention. Design/method: In the present experiment we hypothesized that basketball players (N = 40) with sufficient self-control resources would perform relatively better on a computer-based decision-making task under distraction conditions than a group whose self-control resources had been depleted in a prior task requiring self-control. Results: The results are in line with the strength model of self-control, demonstrating that an athlete's capability to focus attention relies on the situational availability of self-control strength. Conclusions: The current results indicate that having sufficient self-control strength in interference-rich sport settings is likely to be beneficial for decision making.
Abstract:
Objectives: It has been repeatedly demonstrated that athletes in a state of ego depletion do not perform up to their capabilities in high pressure situations. We assume that momentarily available self-control strength determines whether individuals in high pressure situations can resist distracting stimuli. Design/method: In the present study, we applied a between-subjects design in which 31 experienced basketball players were randomly assigned to a depletion group or a non-depletion group. Participants performed 30 free throws while listening to statements representing worrisome thoughts (as frequently experienced in high pressure situations) over stereo headphones. Participants were instructed to block out these distracting audio messages and focus on the free throws. We postulated that depleted participants would be more likely to be distracted and would also perform worse in the free throw task. Results: The results supported our assumptions: depleted participants paid more attention to the distracting stimuli and displayed worse performance in the free throw task. Conclusions: These results indicate that sufficient levels of self-control strength can serve as a buffer against distracting stimuli under pressure.
Abstract:
Hospital districts (HDs) that serve the uninsured and the needy face new challenges with the implementation of Medicaid managed care. The potential loss of Medicaid patients and revenues may affect the ability to cost-shift and subsequently decrease the ability of the HD to meet its legal obligation of providing care for the uninsured. To investigate HD viability in the current market, the aims of this study were to: (1) describe the HD's environment, (2) document the HD's strategic response, (3) document changes in the HD's performance (patient volume) and financial status, and (4) determine whether relationships or trends exist between HD strategy, performance and financial status. To achieve these aims, three Texas HDs (Fort Worth, Lubbock, and San Antonio) were selected for evaluation. For each HD, four types of strategic responses were documented and evaluated for change. In addition, the ability of each HD to sustain operations was evaluated by documenting changes in performance and financial status (patient volume and financial ratios). A pre-post case study design method was used in which the Medicaid managed care "rollout" date at each site was the central date. First, a descriptive analysis was performed which documented the environment, strategy, financial status, and patient volume of each hospital district. Second, to compare hospital districts, each hospital district was: (i) classified by a risk index, (ii) classified by its strategic response profile, and (iii) given a performance score based upon pre-post changes in patient volume and financial indicators. Results indicated that all three HDs operate in a high-risk environment compared to the rest of the nation. Two HDs chose the "Status Quo" response whereas one HD chose the "Competitive Proactive" response. Medicaid patient volume decreased in two of the three HDs, whereas indigent patient volume increased in two of the three (an indication of increasing financial risk). Total patient revenues for all HDs increased over the study period; however, the rate of increase slowed for all three after the Medicaid rollout date. All HDs experienced a decline in financial status between the pre and post periods, with the greatest decline observed in the HD that saw the greatest increase in indigent patient volume. The pre-post case study format used and the lack of control study sites do not allow for assignment of causality. However, the results suggest possible adverse effects of Medicaid managed care and the need for a larger study based on a stronger evaluation research design.
Abstract:
A recent study by Rozvany and Sokół discussed an important topic in structural design: the allowance for support costs in the optimization process. This paper examines a frequently used kind of support, a simple foundation whose horizontal reaction is provided by friction, which appears not to be covered by the Authors' approach. A simple example is examined to illustrate the case and to apply both the Authors' method and the standard design method.
Abstract:
This paper presents a high-power, high-efficiency PA design method using the load-pull technique. Harmonic impedance control at the virtual drain is accomplished through the use of tunable pre-matching circuits and modeling of package parasitics. A 0.5 µm GaN high electron mobility transistor (HEMT) is characterized using the method, and load-pull measurements are simulated illustrating the impact of varying the 2nd and 3rd harmonic terminations. These harmonic terminations are added to satisfy the conditions for class-F operation. The method is verified by design and simulation of a 40-W class-F PA prototype at 1.64 GHz with 76% drain efficiency and 10 dB gain (70% PAE).
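As a quick sanity check, the reported figures are mutually consistent under the usual definitions (drain efficiency = Pout/Pdc, PAE = (Pout - Pin)/Pdc); a short Python back-of-the-envelope using only the numbers quoted above:

```python
P_out = 40.0          # W, reported output power
gain_dB = 10.0        # reported gain
eta_drain = 0.76      # reported drain efficiency = P_out / P_dc

P_in = P_out / 10 ** (gain_dB / 10)   # 4 W of drive power
P_dc = P_out / eta_drain              # ~52.6 W drawn from the supply
pae = (P_out - P_in) / P_dc           # power-added efficiency

print(f"P_in = {P_in:.1f} W, P_dc = {P_dc:.1f} W, PAE = {pae:.1%}")
# -> PAE of roughly 68%, in line with the quoted ~70% once rounding is taken into account
```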
Abstract:
A new free-form optics design method could unleash the full potential of tracking-integrated solar concentrators.