845 results for exploratory design methods
Abstract:
Modern software systems are often large and complicated. To better understand, develop, and manage such systems, researchers have for the last decade studied software architectures, which provide the top-level structural design of a software system. One major research focus in software architecture is formal architecture description languages, but most existing work concentrates on descriptive capability and puts less emphasis on software architecture design methods and formal analysis techniques, which are necessary to develop correct software architecture designs. Refinement is a general approach of adding detail to a software design, and a formal refinement method can further ensure certain design properties. This dissertation proposes refinement methods, including a set of formal refinement patterns and complementary verification techniques, for software architecture design using the Software Architecture Model (SAM), which was developed at Florida International University. First, a general guideline for software architecture design in SAM is proposed. Second, specification construction through property-preserving refinement patterns is discussed. The refinement patterns are categorized into connector refinement, component refinement, and high-level Petri net refinement; these three levels apply to overall system interaction, architectural components, and the underlying formal language, respectively. Third, verification after modeling is discussed as a complementary technique to specification refinement. Two formal verification tools, the Stanford Temporal Prover (STeP) and the Simple Promela Interpreter (SPIN), are adopted into SAM to develop the initial models. Fourth, formalization and refinement of security issues are studied: a method for security enforcement in SAM is proposed, the Role-Based Access Control model is formalized using predicate transition nets and Z notation, and patterns for enforcing access control and auditing are proposed. Finally, modeling and refining a life insurance system demonstrates how to apply the refinement patterns for software architecture design using SAM and how to integrate the access control model. The results of this dissertation demonstrate that a refinement method is an effective way to develop a high-assurance system. The method developed here extends existing work on modeling software architectures using SAM and makes SAM a more usable and valuable formal tool for software architecture design.
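To make the Role-Based Access Control (RBAC) idea concrete for readers outside the field: the sketch below is a minimal Python illustration of the access decision RBAC formalizes, not the dissertation's predicate-transition-net or Z formalization; all names (RBACPolicy, assign_role, the example users and roles) are hypothetical.

```python
# Minimal illustrative sketch of a Role-Based Access Control check.
# Not the dissertation's formalization; names and structure are hypothetical.
from dataclasses import dataclass, field

@dataclass
class RBACPolicy:
    user_roles: dict[str, set[str]] = field(default_factory=dict)  # user -> roles
    role_perms: dict[str, set[str]] = field(default_factory=dict)  # role -> permissions

    def assign_role(self, user: str, role: str) -> None:
        self.user_roles.setdefault(user, set()).add(role)

    def grant_permission(self, role: str, perm: str) -> None:
        self.role_perms.setdefault(role, set()).add(perm)

    def check(self, user: str, perm: str) -> bool:
        # Access is allowed iff some role assigned to the user carries the permission.
        return any(perm in self.role_perms.get(r, set())
                   for r in self.user_roles.get(user, set()))

policy = RBACPolicy()
policy.assign_role("alice", "underwriter")
policy.grant_permission("underwriter", "approve_policy")
assert policy.check("alice", "approve_policy")
assert not policy.check("alice", "audit_logs")
```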
Abstract:
Background
Prostate cancer is one of the most common male cancers worldwide. Active Surveillance (AS) has been developed to allow men with lower risk disease to postpone or avoid the adverse side effects associated with curative treatments until the disease progresses. Despite the medical benefits of AS, it is reported that living with untreated cancer can create a significant emotional burden for patients.
Methods/design
The aim of this study is to gain insight into the experiences of men eligible to undergo AS for favourable-risk prostate cancer (PCa).
This study has a mixed-methods sequential explanatory design consisting of two phases: quantitative followed by qualitative. Phase 1 has a multiple-point, prospective, longitudinal exploratory design. Ninety men diagnosed with favourable-risk prostate cancer will be assessed immediately post-diagnosis (baseline) and followed over a period of 12 months at 3-month intervals. Ninety age-matched men with no cancer diagnosis will also be recruited using peer nomination and followed up at the same 3-month intervals. Following completion of Phase 1, 10–15 AS participants who have reported the best and the worst psychological functioning will be invited to participate in semi-structured qualitative interviews. Phase 2 will facilitate further exploration of the quantitative results and obtain a richer understanding of participants' personal interpretations of their illness and psychological wellbeing.
Discussion
To our knowledge, this is the first study to utilise early baseline measures; include a healthy comparison group; calculate sample size through power calculations; and use a mixed-methods approach to gain a deeper, more holistic insight into the experiences of men diagnosed with favourable-risk prostate cancer.
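As a generic illustration of the sample-size step mentioned above (the abstract does not state the study's actual effect size, test, or alpha, so the inputs below are assumed), a two-sample power calculation in Python might look like:

```python
# Illustrative two-sample power calculation; effect size, alpha, and power
# are assumed, not taken from the study protocol.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.42,  # assumed standardized difference (Cohen's d)
    alpha=0.05,        # two-sided significance level
    power=0.80,        # target power
    alternative="two-sided",
)
print(f"required sample size per group: {n_per_group:.1f}")  # ~90 per arm
```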
Abstract:
There are many methods for the analysis and design of embedded cantilever retaining walls. They involve various simplifications of the pressure distribution to allow calculation of the limiting equilibrium retained height and of the bending moment when the retained height is less than the limiting equilibrium value, i.e. the serviceability case. Recently, a new method for determining the serviceability earth pressure and bending moment has been proposed. This method makes an assumption defining the point of zero net pressure, which implies that the passive pressure is not fully mobilised immediately below the excavation level. The finite element analyses presented in this paper examine the net pressure distribution on walls in which the retained height is less than the limiting equilibrium value. The study shows that for all practical walls, the earth pressure distributions on the front and back of the wall are at their limit values, Kp and Ka respectively, when the lumped factor of safety Fr is less than or equal to 2.0. A rectilinear net pressure distribution is proposed that is intuitively logical. It produces good predictions of the complete bending moment diagram for walls in the service configuration, and the proposed method gives results in excellent agreement with centrifuge model tests. The study also shows that the method for determining the serviceability bending moment suggested by Padfield and Mair (1) in CIRIA Report 104 gives excellent predictions of the maximum bending moment in practical cantilever walls, providing the missing data needed to verify and justify the CIRIA 104 method.
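For orientation, the limit values referred to above are the active and passive earth pressure coefficients Ka and Kp. A minimal sketch using the classical Rankine expressions (not the paper's finite element analysis, and with assumed soil properties) is:

```python
# Rankine active/passive earth pressure coefficients and a simple
# active-pressure estimate for a cantilever wall. Illustrative only;
# the paper derives pressures from finite element analyses.
import math

def rankine_coefficients(phi_deg: float) -> tuple[float, float]:
    """Return (Ka, Kp) for a cohesionless soil with friction angle phi."""
    phi = math.radians(phi_deg)
    ka = math.tan(math.pi / 4 - phi / 2) ** 2
    kp = math.tan(math.pi / 4 + phi / 2) ** 2
    return ka, kp

gamma = 18.0       # unit weight of soil, kN/m^3 (assumed)
retained_h = 4.0   # retained height, m (assumed)
ka, kp = rankine_coefficients(30.0)

# Active pressure behind the wall at excavation level (kPa):
p_active = ka * gamma * retained_h
print(f"Ka={ka:.3f}, Kp={kp:.3f}, active pressure at dig level={p_active:.1f} kPa")
```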
Abstract:
1. There are a variety of methods that could be used to increase the efficiency of the design of experiments. However, it is only recently that such methods have been considered in the design of clinical pharmacology trials. 2. Two such methods, termed data-dependent (e.g. simulation) and data-independent (e.g. analytical evaluation of the information in a particular design), are increasingly used as efficient methods for designing clinical trials. These two design methods have tended to be viewed as competitive, although a complementary role in design is proposed here. 3. The impetus for the use of these two methods has been the need for a more fully integrated approach to the drug development process, one that specifically allows for sequential development (i.e. where the results of early-phase studies influence later-phase studies). 4. The present article briefly presents the background and theory that underpin both the data-dependent and data-independent methods, with illustrative examples from the literature. In addition, the potential advantages and disadvantages of each method are discussed.
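To illustrate the data-independent approach (analytical evaluation of the information in a design), the sketch below compares two sampling schedules for an assumed mono-exponential model by the determinant of the Fisher information matrix; the model, parameter values, and times are all illustrative, not from the article:

```python
# Illustrative "data-independent" design evaluation: Fisher information for a
# mono-exponential model y = A*exp(-k*t) with additive noise sigma.
import numpy as np

def fisher_information(times, A=100.0, k=0.3, sigma=5.0):
    t = np.asarray(times, dtype=float)
    # Sensitivities of the model prediction to each parameter.
    dA = np.exp(-k * t)            # d y / d A
    dk = -A * t * np.exp(-k * t)   # d y / d k
    J = np.column_stack([dA, dk])  # n_times x n_params Jacobian
    return J.T @ J / sigma**2      # Fisher information matrix

design_1 = [1.0, 2.0, 4.0, 8.0]   # candidate sampling times (h)
design_2 = [1.0, 1.5, 2.0, 2.5]   # a more clustered alternative

for name, d in [("design_1", design_1), ("design_2", design_2)]:
    fim = fisher_information(d)
    print(name, "D-criterion (det FIM):", np.linalg.det(fim))
# The design with the larger determinant is more informative (D-optimality).
```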
Abstract:
This paper presents the results of an experimental study of the technical viability of two mixture designs for self-consolidating concrete (SCC) proposed by two Portuguese researchers in a previous work. The objective was to find the best method to provide the required characteristics of SCC in the fresh and hardened states without having to experiment with a large number of mixtures. Five SCC mixtures, each with a volume of 25 L (6.61 gal.), were prepared using a forced mixer with a vertical axis for each of three compressive strength targets: 40, 55, and 70 MPa (5.80, 7.98, and 10.15 ksi). The mixtures' fresh-state properties of fluidity, segregation resistance, and bleeding and blockage tendency, and their hardened-state property of compressive strength, were compared. For this study, the following tests were performed: slump-flow, V-funnel, L-box, box, and compressive strength. The results made it possible to identify the most influential factors in the design of the SCC mixtures.
Abstract:
Dissertation presented to obtain the Doutoramento (Ph.D.) degree in Biochemistry at the Instituto de Tecnologia Química e Biológica da Universidade Nova de Lisboa.
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Statistics and Information Management.
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's degree in Management from the NOVA School of Business and Economics.
Abstract:
OBJECTIVE: To describe chronic disease management programs active in Switzerland in 2007, using an exploratory survey. METHODS: We searched the internet (Swiss official websites and Swiss web pages, using Google), a medical electronic database (Medline), and reference lists of pertinent articles, and contacted key informants. Programs met our operational definition of chronic disease management if their interventions targeted a chronic disease, included a multidisciplinary team (two or more healthcare professionals), lasted at least six months, and had already been implemented and were active in December 2007. We developed an extraction grid and collected data pertaining to eight domains (patient population, intervention recipient, intervention content, delivery personnel, method of communication, intensity and complexity, environment, clinical outcomes). RESULTS: We identified seven programs fulfilling our operational definition of chronic disease management. Programs targeted patients with diabetes, hypertension, heart failure, obesity, psychosis and breast cancer. Interventions were multifaceted; all included education and half considered planned follow-ups. The recipients of the interventions were patients, and the healthcare professionals involved were physicians, nurses, social workers, psychologists and case managers of various backgrounds. CONCLUSIONS: In Switzerland, a country with universal healthcare insurance coverage and little incentive to develop new healthcare strategies, chronic disease management programs are scarce. For future developments, appropriate evaluations of existing programs, involvement of all healthcare stakeholders, strong leadership and political will are, at least, desirable.
Abstract:
State Highway Departments and local street and road agencies are currently faced with aging highway systems and a need to extend the life of some of their pavements. The agency engineer should have the opportunity to explore the use of multiple surface types in selecting a preferred rehabilitation strategy. This study was designed to examine the portland cement concrete overlay alternative, especially the design of overlays for existing composite (portland cement and asphaltic cement concrete) pavements. Existing design procedures for portland cement concrete overlays deal primarily with an existing asphaltic concrete pavement over a granular or stabilized base. This study reviewed those design methods and then developed a design procedure for overlays of composite pavements. It deals directly with existing portland cement concrete pavements that have been overlaid with successive asphaltic concrete overlays and need another overlay because of poor performance of the existing surface. The results provide the engineer with a way to use existing deflection technology, coupled with materials testing and a combination of existing overlay design methods, to determine the design thickness of the portland cement concrete overlay. The design methodology guides the engineer from the evaluation of the existing pavement condition through the construction of the overlay. It also provides a structural analysis of the effect of various joint and widening patterns on the performance of such designs. This work provides the engineer with a portland cement concrete overlay solution for composite pavements or conventional asphaltic concrete pavements in need of surface rehabilitation.
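For orientation, overlay design procedures of the general kind reviewed here often rest on thickness-deficiency relations. The sketch below uses the classical Corps of Engineers unbonded-overlay equation with assumed inputs; it is a generic textbook relation, not the procedure developed in this study:

```python
# Illustrative overlay thickness calculation using the classical Corps of
# Engineers "deficiency" relation for an unbonded PCC overlay of a PCC
# pavement. Generic textbook method; all inputs are assumed.

def unbonded_overlay_thickness(h_required: float, h_existing: float,
                               condition: float) -> float:
    """h_o = sqrt(h_d^2 - C*h_e^2): unbonded PCC overlay thickness (inches).

    h_required: slab thickness a new full-depth design would need.
    h_existing: thickness of the existing PCC slab.
    condition:  structural condition factor C (about 0.35 to 1.0).
    """
    return (h_required**2 - condition * h_existing**2) ** 0.5

# Example: 11 in. required, 8 in. existing slab in fair condition (C = 0.75).
print(f"{unbonded_overlay_thickness(11.0, 8.0, 0.75):.1f} in.")  # ~8.5 in.
```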
Abstract:
Expansion joints increase both the initial cost and the maintenance cost of bridges. Integral abutment bridges provide an attractive design alternative because expansion joints are eliminated from the bridge itself. However, the piles in these bridges are subjected to horizontal movement as the bridge expands and contracts during temperature changes. The objective of this research was to develop a method of designing piles for these conditions. Separate field tests simulating a pile and a bridge girder were conducted for three loading cases: (1) vertical load only, (2) horizontal displacement of pile head only, and (3) combined horizontal displacement of pile head with subsequent vertical load. Both tests (1) and (3) reached the same ultimate vertical load, that is, the horizontal displacement had no effect on the vertical load capacity. Several model tests were conducted in sand with a scale factor of about 1:10. Experimental results from both the field and model tests were used to develop the vertical and horizontal load-displacement properties of the soil. These properties were input into the finite element computer program Integral Abutment Bridge Two-Dimensional (IAB2D), which was developed under a previous research contract. Experimental and analytical results compared well for the test cases. Two alternative design methods, both based upon the American Association of State Highway and Transportation Officials (AASHTO) Specification, were developed. Alternative One is quite conservative relative to IAB2D results and does not permit plastic redistribution of forces. Alternative Two is also conservative when compared to IAB2D, but plastic redistribution is permitted. To use Alternative Two, the pile cross section must have sufficient inelastic rotation capacity before local buckling occurs. A design example for a friction pile and an end-bearing pile illustrates both alternatives.
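As a rough point of reference for the pile-head behaviour studied here, a linear beam-on-elastic-foundation (Hetényi) estimate of lateral head deflection can be sketched as follows. The study itself used nonlinear soil springs in the IAB2D finite element program, and every input below is assumed:

```python
# Illustrative linear estimate of lateral pile-head response using the
# classical beam-on-elastic-foundation solution for a long pile loaded
# laterally at the ground line. All inputs are assumed.
import math

E = 200e9   # steel modulus, Pa
I = 8.0e-5  # pile second moment of area, m^4 (assumed HP section)
k = 10e6    # modulus of subgrade reaction, N/m^2 (assumed uniform)
P = 50e3    # lateral load at pile head, N

lam = (k / (4 * E * I)) ** 0.25   # characteristic inverse length, 1/m
y0 = 2 * P * lam / k              # head deflection, m
M_max = 0.3224 * P / lam          # peak bending moment magnitude, N*m

print(f"head deflection = {y0*1000:.1f} mm, max moment = {M_max/1e3:.1f} kN*m")
```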
Abstract:
Heavy traffic volumes frequently cause distress in asphalt pavements that were designed under accepted design methods and criteria. The distress appears in the form of rutting in the wheel tracks and rippling or shoving in areas where traffic accelerates or decelerates. Apparently, accepted stability test methods alone do not always assure the desired service performance of asphaltic pavements under heavy traffic. The Bituminous Research Laboratory of the Engineering Research Institute of Iowa State University therefore undertook the development of a laboratory device by which the resistance of an asphalt paving mix to displacement under traffic might be evaluated, and which might also serve as a supplemental test of the adequacy of a mix designed by stability procedures.
Abstract:
The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e. the optimisation of mechanism proportions at the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology to facilitate the design, analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. Formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis. Exact synthesis is based on solving n linear or nonlinear equations in n variables; solutions are obtained by closed-form classical or modern algebraic methods, or by numerical methods based on polynomial continuation or homotopy. Approximate synthesis is based on minimising the approximation error by direct optimisation. The main drawbacks of exact synthesis are (ia) limitations on the number of design specifications and (iia) failure to handle design constraints, especially inequality constraints. The main drawbacks of approximate synthesis are (ib) the difficulty of choosing a proper initial linkage and (iib) the difficulty of finding more than one solution. Recent formulations of the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Motivated by practical design needs, mixed exact-approximate position synthesis, with two exact and an unlimited number of approximate positions, has also been developed. Its solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. A literature survey first shows that the algebraic and numerical solution methods used in computational kinematics can solve non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. The thesis resolves the problem of positive-dimensional solution sets by adopting principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). Applying the developed solution method to the dyadic equations in direct polynomial form with two to three precision points, it is algebraically proved and numerically demonstrated that the map of ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved.
The positive-dimensional solution sets associated with the poles may contain physically meaningful solutions in the form of optimal, defect-free mechanisms. Traditionally, the optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process, which results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The method combines the two-precision-point formulation with optimisation of substructures (using mathematical programming techniques or optimisation methods based on probability and statistics), driven by criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, drawbacks (ia)-(iib) are eliminated. The design principles of the developed method are based on a design-tree approach to mechanical systems, and the method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when integrated with mechanical system simulation techniques.
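To make the dyadic equations concrete for readers outside the field, the sketch below solves the classical standard-form dyad synthesis equations for three precision points with the crank rotations chosen freely, which makes the system linear. All numerical values are assumed for illustration; the thesis itself addresses the far harder positive-dimensional and constrained cases:

```python
# Illustrative three-precision-point dyad synthesis in "standard form":
#   W*(exp(i*beta_j) - 1) + Z*(exp(i*alpha_j) - 1) = delta_j,  j = 2, 3
# With coupler rotations alpha_j and displacements delta_j prescribed and
# crank rotations beta_j chosen freely, this is a linear complex system.
import numpy as np

alpha = [0.0, np.deg2rad(20.0), np.deg2rad(45.0)]  # coupler rotations
delta = [0.0, 1.0 + 0.5j, 2.0 + 0.8j]              # precision-point displacements
beta = [0.0, np.deg2rad(30.0), np.deg2rad(70.0)]   # free-choice crank rotations

A = np.array([[np.exp(1j * beta[1]) - 1, np.exp(1j * alpha[1]) - 1],
              [np.exp(1j * beta[2]) - 1, np.exp(1j * alpha[2]) - 1]])
b = np.array([delta[1], delta[2]])

W, Z = np.linalg.solve(A, b)   # dyad vectors as complex numbers
print("W =", W, " Z =", Z)
# Sweeping beta over a grid and re-solving traces out a ground-pivot map,
# the kind of solution space the thesis analyses.
```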
Abstract:
The User-Centered Design (UCD) Gymkhana is a tool for human-computer interaction practitioners to demonstrate, through a game, the key user-centered design methods and how they interrelate in the design process. The target audiences are other organizational departments unfamiliar with UCD but whose work is related to the definition, creation, and update of a product or service.
Abstract:
Technology scaling has proceeded into dimensions in which the reliability of manufactured devices is becoming endangered. The reliability decrease is a consequence of physical limitations, the relative increase of variations, and decreasing noise margins, among others. A promising solution for bringing the reliability of circuits back to a desired level is the use of design methods which introduce tolerance against possible faults in an integrated circuit. This thesis studies and presents fault tolerance methods for networks-on-chip (NoC), a design paradigm targeted at very large systems-on-chip. In a NoC, resources such as processors and memories are connected to a communication network, comparable to the Internet. Fault tolerance in such a system can be achieved at many abstraction levels. The thesis studies the origin of faults in modern technologies and explains the classification into transient, intermittent, and permanent faults. A survey of fault tolerance methods is presented to demonstrate the diversity of available methods. Networks-on-chip are approached by exploring their main design choices: the selection of a topology, routing protocol, and flow control method. Fault tolerance methods for NoCs are studied at different layers of the OSI reference model. The data link layer provides a reliable communication link over a physical channel. Error control coding is an efficient fault tolerance method at this abstraction level, especially against transient faults. Error control coding methods suitable for on-chip communication are studied and their implementations presented. Error control coding loses its effectiveness in the presence of intermittent and permanent faults, so other solutions against them are presented: the introduction of spare wires and split transmissions are shown to provide good tolerance against intermittent and permanent errors, and their combination with error control coding is illustrated. At the network layer, positioned above the data link layer, fault tolerance can be achieved through the design of fault-tolerant network topologies and routing algorithms; both approaches are presented in the thesis, together with realizations in both categories. The thesis concludes that an optimal fault tolerance solution contains carefully co-designed elements from different abstraction levels.
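As a concrete illustration of error control coding of the kind discussed for on-chip links (a generic textbook code, not a specific scheme from the thesis), a Hamming(7,4) single-error-correcting encoder and decoder can be sketched in a few lines of Python:

```python
# Illustrative Hamming(7,4) single-error-correcting code, the kind of error
# control coding used on on-chip links against transient faults.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix (systematic form)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return (np.asarray(data4) @ G) % 2

def correct(word7):
    syndrome = (H @ np.asarray(word7)) % 2
    if syndrome.any():   # non-zero syndrome: locate and flip the errored bit
        col = next(i for i in range(7) if (H[:, i] == syndrome).all())
        word7 = np.asarray(word7).copy()
        word7[col] ^= 1
    return word7

codeword = encode([1, 0, 1, 1])
corrupted = codeword.copy(); corrupted[2] ^= 1   # one transient bit flip
assert (correct(corrupted) == codeword).all()
```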