806 results for Conservative Design Approach
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Chapter 1: Under the average common value function, we almost uniquely select the mechanism that gives the seller the largest portion of the true value in the worst case, among all direct mechanisms that are feasible, ex-post implementable and individually rational. Chapter 2: Strategy-proof, budget-balanced, anonymous, envy-free linear mechanisms assign p identical objects to n agents. The efficiency loss is the largest ratio of surplus loss to efficient surplus over all profiles of non-negative valuations. The smallest efficiency loss is uniquely achieved by the following simple allocation rule: assign one object to each of the p−1 agents with the highest valuations, a large probability of an object to the agent with the pth highest valuation, and the remaining probability to the agent with the (p+1)th highest valuation. When "envy-freeness" is replaced by the weaker condition of "voluntary participation", the optimal mechanism differs only when p is much less than n. Chapter 3: One group is to be selected from a set of agents. Agents have preferences over the size of the group, conditional on being selected; preferences over size, together with the "stand-outside" option, are single-peaked. We take a mechanism design approach and search for group selection mechanisms that are efficient, strategy-proof and individually rational. Two classes of such mechanisms are presented. The proposing mechanism allows agents to either maintain or shrink the group size following a fixed priority, and is characterized by group strategy-proofness. The voting mechanism enlarges the group size in each voting round, and achieves at least half of the maximum group size compatible with individual rationality.
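A minimal sketch of the Chapter 2 allocation rule in Python, with the "large probability" left as a free parameter q, since the abstract does not give its closed form:

```python
def allocate(valuations, p, q):
    """Allocation probabilities for p identical objects among n agents.

    Sketch of the rule described above: the p-1 highest-valuation agents
    each receive an object for sure, the agent with the p-th highest
    valuation receives one with (large) probability q, and the agent with
    the (p+1)-th highest valuation with probability 1 - q. The value of q
    is not specified in the abstract; it is left as a parameter here.
    """
    n = len(valuations)
    ranking = sorted(range(n), key=lambda i: valuations[i], reverse=True)
    probs = [0.0] * n
    for rank, agent in enumerate(ranking):
        if rank < p - 1:
            probs[agent] = 1.0        # top p-1 agents: object for sure
        elif rank == p - 1:
            probs[agent] = q          # p-th highest valuation
        elif rank == p:
            probs[agent] = 1.0 - q    # (p+1)-th highest valuation
    return probs

# Example: p = 3 objects, n = 6 agents
print(allocate([9, 7, 5, 4, 3, 1], p=3, q=0.9))  # [1.0, 1.0, 0.9, 0.1, 0.0, 0.0]
```

Note that the probabilities sum to p, so exactly p objects are assigned in expectation.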
Abstract:
This thesis describes a study conducted to develop a new approach for the design of compliant mechanisms (CMs). Compliant mechanisms are currently based on a 2.5D design method, which limits the range of applications in which they can be used. The proposed research suggests using a 3D approach for the design of CMs, to better exploit their useful properties. To test the viability of this method, a practical application related to morphing wings was chosen. During this project, a working prototype of a variable sweep and variable AoA system was designed and built for an SUAV. A compliant hinge, designed using the proposed 3D design approach, allows the system to achieve two DOF. Two methods were used to validate the capabilities of the design. The first was simulation: analysis software provided a basic picture of the stress and deformation of the designed mechanism. The second was additive manufacturing (AM): several prototypes were manufactured using FDM and material jetting technologies. The first model showed that the DOF could be achieved. Models manufactured using material jetting proved that the designed model could provide the desired motion and exploit the positive characteristics of CMs. The system could be manufactured successfully in one part, which makes an extensive assembly process redundant and improves structural quality. The materials chosen for the prototypes were PLA, VeroGray and Rigur. Although the material properties were suboptimal for the final purpose, successful results were obtained: the prototypes proved tough and provided the desired motion. This shows that the proposed design method can be a useful tool for the design of improved CMs. Furthermore, the variable sweep and AoA system could be used to boost the flight performance of SUAVs.
Abstract:
The USP General Chapter <2040> Disintegration and Dissolution of Dietary Supplements introduced a rupture test as a performance test for soft-shell capsules. Traditionally, the disintegration test was used to determine the disintegration time of all solid oral dosage forms. The aim of this investigation was to examine differences between the rupture test and the disintegration test using soft-shell capsules. Five soft-shell capsule products were chosen based on their filling contents and treated to simulate a production deficiency. The study design compared capsules as received with capsules that had been treated by coating them with the liquid contents of another capsule. The capsules were incubated at room temperature and at 40 °C. The tests were repeated after two weeks; at each time point, twelve capsules of each product were tested using the rupture and disintegration tests, six untreated and six treated. Rupture and disintegration times were recorded as dependent variables in each experiment. The data were analyzed using ANOVA. According to the USP definition of disintegration, the rupture of a soft-shell capsule can be seen as fulfilling the disintegration criterion if the capsule contents are semisolid or liquid. Statistical analysis showed no advantage of the rupture test over the disintegration test. On a product-by-product basis, both tests were sensitive to certain investigated parameters. A noticeable difference between the two tests was that in most cases the rupture test reached the defined endpoint faster than the disintegration test. Soft-shell capsules that are subject to a Quality by Design approach should be tested with both methods to determine which performance test is the most appropriate for a specific product.
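A sketch of the factorial ANOVA implied by this design, using simulated placeholder data (room temperature is assumed to be 25 °C; the rupture times below are illustrative, not the study's measurements):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
rows = []
for treated in (0, 1):              # untreated vs coated capsules
    for temp_c in (25, 40):         # 25 C assumed for "room temperature"
        for _ in range(6):          # six capsules per condition
            rows.append({
                "treated": treated,
                "temp_c": temp_c,
                "rupture_s": rng.normal(300 + 60 * treated + 2 * (temp_c - 25), 20),
            })
df = pd.DataFrame(rows)

# Two-way ANOVA: treatment, temperature, and their interaction
model = smf.ols("rupture_s ~ C(treated) * C(temp_c)", data=df).fit()
print(anova_lm(model, typ=2))
```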
Abstract:
This study describes the pedagogical impact of real-world experimental projects undertaken as part of an advanced undergraduate Fluid Mechanics subject at an Australian university. The projects have been organised to complement traditional lectures and introduce students to the challenges of professional design, physical modelling, data collection and analysis. The physical model studies combine experimental, analytical and numerical work in order to develop students' abilities to tackle real-world problems. A first study illustrates the differences between ideal- and real-fluid flow force predictions based upon model tests of buildings in a large wind tunnel used for research and professional testing. A second study introduces the complexity arising from unsteady non-uniform wave loading on a sheltered pile. The teaching initiative is supported by feedback from undergraduate students. The pedagogy of the course and projects is discussed with reference to experiential, project-based and collaborative learning. The practical work complements traditional lectures and tutorials, and provides learning opportunities that cannot be reproduced in the classroom, real or virtual. Student feedback demonstrates strong interest in the project phases of the course, associated with greater motivation for the course and leading in turn to lower failure rates. In terms of learning outcomes, the primary aim is to enable students to deliver a professional report as the final product, in which physical model data are compared with ideal-fluid flow calculations and real-fluid flow analyses. The students are thus exposed to a professional design approach involving a high level of expertise in fluid mechanics, with sufficient academic guidance to achieve carefully defined learning goals, while retaining sufficient flexibility for students to construct their own learning goals. The overall pedagogy is a blend of problem-based and project-based learning, which reflects academic research and professional practice. The assessment is a mix of peer-assessed oral presentations and written reports that aims to maximise student reflection and development. Student feedback indicated strong motivation for courses that include a well-designed project component.
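The contrast between ideal- and real-fluid force predictions that the first study builds on can be made concrete with one standard pair of results (our choice of illustration, not a formula quoted from the course materials):

```latex
% Ideal (potential) flow around a bluff body predicts zero drag
% (d'Alembert's paradox), whereas real-fluid measurements follow the
% familiar drag law with an empirically determined drag coefficient C_D:
F_D^{\mathrm{ideal}} = 0, \qquad
F_D^{\mathrm{real}} = \tfrac{1}{2}\,\rho\,U^{2} C_D A
```

where ρ is the air density, U the reference wind speed and A the frontal area of the building model.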
Abstract:
When the Internet was born, the purpose was to interconnect computers to share digital data at large scale. When embedded systems were born, on the other hand, the objective was to control system components under real-time constraints through sensing devices, typically at small to medium scales. With the great evolution of Information and Communication Technology (ICT), the tendency is to enable ubiquitous and pervasive computing to control everything (physical processes and physical objects) anytime and at large scale. This new vision recently gave rise to the paradigm of Cyber-Physical Systems (CPS). In this position paper, we provide a realistic vision of the concept of the Cyber-Physical Internet (CPI), discuss its design requirements and present the limitations of current networking abstractions in fulfilling these requirements. We also debate whether it is more productive to adopt a system integration approach or a radical design approach for building large-scale CPS. Finally, we present a sample of real-time challenges that must be considered in the design of the Cyber-Physical Internet.
Abstract:
Existing computer programming training environments help users learn programming by solving problems from scratch. Nevertheless, starting the resolution of a program can be frustrating and demotivating if the student does not know where and how to begin. Skeleton programming facilitates a top-down design approach, where a partially functional system with complete high-level structures is available, so the student only needs to progressively complete or update the code to meet the requirements of the problem. This paper presents CodeSkelGen, a program skeleton generator. CodeSkelGen generates skeleton or buggy Java programs from a complete annotated program solution provided by the teacher. The annotations are formally described within an annotation type and processed by an annotation processor. This processor is responsible for a set of actions ranging from the creation of dummy methods to the exchange of operator types included in the source code. The generator tool will be included in a learning environment that aims to assist teachers in creating programming exercises and to help students solve them.
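CodeSkelGen itself consumes annotated Java solutions; as a language-neutral illustration of the underlying idea, here is a sketch in Python that derives a skeleton from a marked solution by stubbing out function bodies. The @stub marker and all names are invented for illustration, not CodeSkelGen's actual annotation type:

```python
import ast
import textwrap

def make_skeleton(source: str, marker: str = "@stub") -> str:
    """Replace the body of every function whose docstring contains `marker`
    with a NotImplementedError placeholder, keeping signatures intact.

    Conceptual analogue only: the real CodeSkelGen processes Java
    annotations via an annotation processor, not Python docstrings.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if doc and marker in doc:
                node.body = [
                    ast.Expr(ast.Constant(doc)),  # keep the docstring
                    ast.Raise(                    # stub out the body
                        exc=ast.Call(
                            func=ast.Name("NotImplementedError", ast.Load()),
                            args=[ast.Constant("TODO: complete this method")],
                            keywords=[],
                        ),
                        cause=None,
                    ),
                ]
    return ast.unparse(ast.fix_missing_locations(tree))

solution = textwrap.dedent('''
    def mean(xs):
        """@stub Compute the arithmetic mean."""
        return sum(xs) / len(xs)
''')
print(make_skeleton(solution))  # prints the skeleton with the body stubbed
```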
Abstract:
Saccharomyces cerevisiae, like other microorganisms, is frequently used in industry to obtain different kinds of products for several areas (research, pharmaceutical compounds, etc.). To obtain high yields of the desired product, adequate medium supplementation is required during the growth of the microorganism. The highest yields are typically reached using complex media; however, the exact formulation of these media is not known. Moreover, it is difficult to control the exact composition of complex media, leading to batch-to-batch variations. To overcome this problem, some industries choose defined media, with a known chemical composition. However, defined media often do not reach the high yields obtained with complex media. To reach similar yields with defined media, the addition of many different compounds has to be tested experimentally, so industries rely on a set of empirical methods to formulate defined media that can match the high yields of complex media. In this thesis, a defined medium for Saccharomyces cerevisiae was developed using a rational design approach. In this approach, a given metabolic network of Saccharomyces cerevisiae is decomposed into unique, not further decomposable sub-networks of metabolic reactions that operate coherently at steady state, so-called elementary flux modes (EFMs). The EFMtool algorithm was used to calculate the EFMs for two Saccharomyces cerevisiae metabolic networks (with and without amino acid supplementation). For the supplemented network, 1,352,172 EFMs were calculated and divided into 1,306,854 EFMs producing biomass and 18,582 EFMs exclusively producing CO2 (cellular respiration). For the non-supplemented network, 635 EFMs were calculated and divided into 215 EFMs producing biomass and 420 EFMs exclusively producing CO2. The EFMs of each group were normalized by the respective glucose consumption value. The EFMs of the supplemented network were then clustered into 30 clusters for the 1,306,854 biomass-producing EFMs and 20 clusters for the 18,582 CO2-producing EFMs; for the non-supplemented network, the EFMs of each metabolic function were grouped into 10 clusters. After the clustering step, the concentrations of the other medium compounds were calculated by choosing a reasonable glucose amount and exploiting the proportionality between compound concentrations and glucose ratios. The approach developed in this thesis may offer a faster and more economical route to media development.
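A minimal sketch of the post-EFM workflow described above (normalize by glucose uptake, cluster, scale compound amounts to a chosen glucose dose), assuming a precomputed EFM matrix with glucose uptake in the first column; the matrix here is a random placeholder, not EFMtool output:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder (modes x exchange-fluxes) matrix standing in for EFMtool
# output; column 0 is assumed to hold glucose uptake.
efms = np.abs(rng.normal(size=(500, 6))) + 0.1

glucose = efms[:, 0:1]
normalized = efms / glucose          # scale each mode to unit glucose uptake

# Cluster the normalized modes (10 clusters, as for the non-supplemented net)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(normalized)

# For each cluster centroid, derive compound amounts for a chosen glucose
# dose via the proportionality each normalized mode encodes.
glucose_amount = 20.0                # an assumed design choice, e.g. g/L
for k in range(10):
    centroid = normalized[labels == k].mean(axis=0)
    print(f"cluster {k}: {np.round(glucose_amount * centroid, 2)}")
```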
Abstract:
Master's dissertation in Design and Marketing
Abstract:
The optimal treatment for prosthetic joint infections has not been clearly defined. We report our experience of the management of acute haematogenous prosthetic joint infection (AHPJI) in patients during a 3-year prospective study in nine Spanish hospitals. Fifty patients, of whom 30 (60%) were female, with a median age of 76 years, were diagnosed with AHPJI. The median infection-free period following joint replacement was 4.9 years. Symptoms were acute in all cases. A distant previous infection and/or bacteraemia were identified in 48%. The aetiology was as follows: Staphylococcus aureus, 19; Streptococcus spp., 14; Gram-negative bacilli, 12; anaerobes, two; and mixed infections, three. Thirty-four (68%) patients were treated with a conservative surgical approach (CSA) with implant retention, and 16 had prosthesis removal. At 2-year follow-up, 24 (48%) were cured, seven (14%) had relapsed, seven (14%) had died, five (10%) had persistent infection, five had re-infection, and two had an unknown evolution. Overall, the treatment failure rates were 57.8% in staphylococcal infections and 14.3% in streptococcal infections; there were no failures in patients with Gram-negative bacillary infections. By multivariate analysis, CSA was the only factor independently associated with treatment failure (OR 11.6; 95% CI 1.29-104.8). We were unable to identify any factors predicting treatment failure in CSA patients, although a Gram-negative bacillary aetiology was a protective factor. These data suggest that although conservative surgery was the only factor independently associated with treatment failure, it could be the first therapeutic choice for the management of Gram-negative bacillary and streptococcal AHPJI, and for some cases of acute S. aureus infection.
Abstract:
Deterioration in portland cement concrete (PCC) pavements can occur due to distresses caused by a combination of traffic loads and weather conditions. A hot mix asphalt (HMA) overlay is the most commonly used rehabilitation technique for such deteriorated PCC pavements. However, the performance of these HMA-overlaid pavements is hindered by reflective cracking, resulting in a significant reduction of pavement serviceability. Various fractured slab techniques, including rubblization, crack and seat, and break and seat, are used to minimize reflective cracking by reducing the slab action. However, designing the structural overlay thickness for cracked-and-seated and rubblized pavements is difficult because the resulting structure is neither a "true" rigid pavement nor a "true" flexible pavement. Existing design methodologies use empirical procedures based on the AASHO Road Test conducted in 1961. However, the AASHO Road Test did not employ any fractured slab technique, and there are numerous limitations associated with extrapolating its results to HMA overlay thickness design for fractured PCC pavements. The main objective of this project is to develop a mechanistic-empirical (ME) design approach for HMA overlay thickness design for fractured PCC pavements. In this design procedure, failure criteria such as the tensile strain at the bottom of the HMA layer and the vertical compressive strain on the surface of the subgrade are used to consider HMA fatigue and subgrade rutting, respectively. The developed ME design system is also implemented in a Visual Basic computer program. A partial validation of the design method with reference to an instrumented trial project (IA-141, Polk County) in Iowa is provided in this report. Tensile strain values at the bottom of the HMA layer collected from FWD testing at this project site are in agreement with the results obtained from the developed computer program.
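The abstract names the two ME failure criteria but not the report's calibrated transfer functions; the sketch below uses the classical Asphalt Institute forms as stand-ins to show how the criteria drive an overlay-thickness check (all numerical inputs are illustrative assumptions):

```python
# Hedged sketch of the two ME failure checks named above; the Asphalt
# Institute transfer functions below are common stand-ins, not the
# project's calibrated models.

def fatigue_life(eps_t: float, e_hma_psi: float) -> float:
    """Allowable load repetitions against HMA fatigue cracking, from the
    tensile strain eps_t at the bottom of the HMA layer (E in psi)."""
    return 0.0796 * (1.0 / eps_t) ** 3.291 * (1.0 / e_hma_psi) ** 0.854

def rutting_life(eps_v: float) -> float:
    """Allowable load repetitions against subgrade rutting, from the
    vertical compressive strain eps_v on top of the subgrade."""
    return 1.365e-9 * (1.0 / eps_v) ** 4.477

# Example strains from a layered-elastic run (illustrative values only)
eps_t, eps_v, e_hma = 150e-6, 350e-6, 450_000.0
design_esals = 2e6

n_fatigue, n_rutting = fatigue_life(eps_t, e_hma), rutting_life(eps_v)
print(f"fatigue life: {n_fatigue:.3e} ESALs")
print(f"rutting life: {n_rutting:.3e} ESALs")
print("OK" if min(n_fatigue, n_rutting) >= design_esals else "thicken overlay")
```

In the design loop, the overlay thickness is increased until both allowable repetitions exceed the design traffic.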
Abstract:
Through a rational design approach, we generated a panel of HLA-A*0201/NY-ESO-1(157-165)-specific T cell receptors (TCRs) with affinities increased by up to 150-fold over the wild-type TCR. Using these TCR variants, which extend just beyond the natural affinity range, along with an extreme supraphysiologic variant with 1400-fold enhanced affinity and a low-binding one, we sought to determine the effect of TCR binding properties, together with cognate peptide concentration, on CD8+ T cell responsiveness. Major histocompatibility complexes (MHC) expressed on the surface of various antigen-presenting cells were peptide-pulsed and used to stimulate human CD8+ T cells expressing the different TCRs via lentiviral transduction. At intermediate peptide concentration we measured maximum cytokine/chemokine secretion, cytotoxicity, and Ca2+ flux for CD8+ T cells expressing TCRs within a dissociation constant (K_D) range of ~1-5 μM. Under the same conditions there was a gradual attenuation in activity for supraphysiologic-affinity TCRs with K_D < ~1 μM, irrespective of CD8 co-engagement and of half-life (t_1/2 = ln 2 / k_off) values. With increased peptide concentration, however, the activity levels of CD8+ T cells expressing supraphysiologic-affinity TCRs were gradually restored. Together our data support the productive hit rate model of T cell activation, arguing that it is not the absolute number of TCR/pMHC complexes formed at equilibrium, but rather their productive turnover, that controls the level of biological activity. Our findings have important implications for various immunotherapies under development, such as adoptive transfer of TCR-engineered CD8+ T cells, as well as for peptide vaccination strategies.
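As a worked example of the kinetic quantities quoted above (the rate constants below are illustrative assumptions, not measurements from the study):

```python
import math

k_on = 1e5        # 1/(M*s), assumed association rate constant
k_off = 0.05      # 1/s, assumed dissociation rate constant

K_D = k_off / k_on            # dissociation constant, M
t_half = math.log(2) / k_off  # half-life of the TCR/pMHC complex, s

print(f"K_D   = {K_D:.2e} M ({K_D * 1e6:.2f} uM)")  # 5.00e-07 M (0.50 uM)
print(f"t_1/2 = {t_half:.1f} s")                    # 13.9 s
```

With these assumed rates, K_D = 0.5 μM falls in the supraphysiologic range (< ~1 μM) where the study observed attenuated activity at intermediate peptide concentration.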
Abstract:
To date there have been few investigations of the substructures in low-volume road (LVR) bridges. Steel sheet piling has the potential to provide an economical alternative to concrete bridge abutments, but it needs investigation with regard to vertical and lateral load resistance, construction methods, and performance monitoring. The objectives of this project were to develop a design approach for sheet pile bridge abutments for short-span low-volume bridges, formulate an instrumentation and monitoring plan to evaluate the performance of sheet pile abutment systems, and understand the cost and construction effort associated with building the sheet pile bridge abutment demonstration projects. Three demonstration projects (Boone, Blackhawk, and Tama Counties) were selected for the design, construction, and monitoring of sheet pile abutment bridges. Each site was unique and required site-specific design and instrumentation monitoring. The key findings from this study are the following: (1) sheet pile abutment bridges provide an effective solution for LVR bridges; (2) the measured stresses and deflections differed from the assumed values, the differences reflecting conservatism in the design and the complexity of field conditions; and (3) additional research is needed to optimize the design.
Abstract:
Orientation: Research that jointly considers the effects of individual characteristics and job characteristics on burnout is necessary, especially when one considers the possibility of curvilinear relationships between job characteristics and burnout. Research purpose: This study examines the contribution of sense of coherence (SOC) and job characteristics to predicting burnout by considering direct and moderating effects. Motivation for this study: Understanding the relationships of individual and job characteristics with burnout is necessary for preventing burnout. It also informs the design of interventions. Research design, approach and method: The participants were 632 working adults (57% female) in South Africa. The measures included the Job Content Questionnaire, the Sense of Coherence Questionnaire and the Maslach Burnout Inventory. The authors analysed the data using hierarchical multiple regression with the enter method. Main findings: Job characteristics and SOC show the expected direct effects on burnout; SOC has a direct negative effect. Job demands and supervisor social support show nonlinear relationships with burnout. SOC moderates the effect of demands on burnout and has a protective function, so that the demands-burnout relationship differs for those with high and low SOC. Practical/managerial implications: The types of effects, the shape of the stressor-strain relationship and the different contributions of individual and job characteristics have implications for designing interventions. Contribution/value-add: SOC functions differently when combined with demands, control and support. These different effects suggest that it is not merely the presence or absence of a job characteristic that is important for well-being outcomes but how people respond to its presence or absence.
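A hedged sketch of the kind of moderated hierarchical regression described above; the variable names, simulated data and effect sizes are assumptions, not the study's dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 632  # sample size matching the study
df = pd.DataFrame({
    "demands": rng.normal(size=n),
    "support": rng.normal(size=n),
    "soc": rng.normal(size=n),
})
# Simulated outcome with a curvilinear demands term and a demands x SOC
# interaction, mirroring the effects the abstract reports.
df["burnout"] = (0.5 * df.demands + 0.2 * df.demands**2 - 0.4 * df.soc
                 - 0.2 * df.demands * df.soc - 0.3 * df.support
                 + rng.normal(scale=0.5, size=n))

# Hierarchical entry: main effects, then the curvilinear term, then the
# demands x SOC interaction that tests moderation.
m1 = smf.ols("burnout ~ demands + support + soc", df).fit()
m2 = smf.ols("burnout ~ demands + I(demands**2) + support + soc", df).fit()
m3 = smf.ols("burnout ~ demands + I(demands**2) + support + soc + demands:soc", df).fit()
print(m1.rsquared, m2.rsquared, m3.rsquared)  # R^2 should rise at each step
```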
Abstract:
In the software industry, long and difficult development cycles can be eased by exploiting software frameworks. Frameworks represent a collection of classes that provide general solutions to the needs of a particular problem domain, freeing software developers to concentrate on application-specific requirements. The use of well-designed frameworks increases the reusability of design solutions and source code more than any other design approach. Knowledge of a given domain can be captured in frameworks, from which finished software products can then be specialized. This Master's thesis describes the design and implementation of a software framework based on software agents. The main emphasis of the work is on describing the design, matching the requirements specification, and the implementation of a framework from which software capable of various kinds of data collection in an Internet environment can be specialized. The experimental part of the work also presents an example application based on the framework developed in the thesis.