908 results for Case method
Abstract:
This survey study aimed to identify the factors influencing the success of animal husbandry cooperatives in Southwest Iran. Using a questionnaire, data were collected from 95 managing directors of the cooperatives, chosen through a multi-stage stratified random sampling method. The study showed an essential need for a systemic framework to analyze the cooperatives’ success. The results showed that the “Honey Bee”, “Cattle (dairy)”, and “Lamb” cooperatives were the most successful among the different kinds of cooperatives. Among individual attributes, “interest”, “technical knowledge”, and “understanding the concept of cooperative”; among economic variables, “income” and “current investment”; and among external factors, “market access” had significant correlations with success, while structural variables had no significant relation. Furthermore, among all the factors, four variables (“interest”, “understanding the concept of cooperative”, “market access”, and “other incomes”) explain the variation in success.
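The analysis pipeline implied above (bivariate correlation screening followed by a regression on the retained predictors) can be sketched as follows; the column names, the use of Spearman correlations, and ordinary least squares are illustrative assumptions, not details taken from the paper.

import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

# Hypothetical survey data: one row per cooperative, "success" is the outcome score.
df = pd.read_csv("cooperatives_survey.csv")
predictors = ["interest", "technical_knowledge", "coop_understanding",
              "income", "current_investment", "market_access", "other_incomes"]

# Bivariate screening: rank correlation of each candidate factor with success.
for col in predictors:
    rho, p = spearmanr(df[col], df["success"])
    print(f"{col}: rho={rho:.2f}, p={p:.3f}")

# Multivariate step: how much variation the four retained variables explain jointly.
X = sm.add_constant(df[["interest", "coop_understanding", "market_access", "other_incomes"]])
print(sm.OLS(df["success"], X).fit().summary())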
Abstract:
Mesh generation is an important step in many numerical methods. We present the “Hierarchical Graph Meshing” (HGM) method as a novel approach to mesh generation, based on algebraic graph theory. The HGM method can be used to systematically construct configurations exhibiting multiple hierarchies and complex symmetry characteristics. The hierarchical description of structures provided by the HGM method can be exploited to increase the efficiency of multiscale and multigrid methods. In this paper, the HGM method is employed for the systematic construction of super carbon nanotubes of arbitrary order, which present a pertinent example of structurally and geometrically complex, yet highly regular, structures. The HGM algorithm is computationally efficient and exhibits good scaling characteristics. In particular, it scales linearly for super carbon nanotube structures and works much faster than geometry-based methods employing neighborhood search algorithms. Its modular character makes it conducive to automatization. For the generation of a mesh, the information about the geometry of the structure in a given configuration is added in a way that relates geometric symmetries to structural symmetries. The intrinsically hierarchical description of the resulting mesh greatly reduces the effort of determining mesh hierarchies for multigrid and multiscale applications and helps to exploit symmetry-related methods in the mechanical analysis of complex structures.
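The abstract does not give the HGM algorithm itself, but its core idea, building a large regular mesh by composing a coarse skeleton graph with a repeated fine motif so that the hierarchy stays explicit, can be illustrated with a generic graph construction; the use of networkx and a Cartesian-product composition here is an assumption for illustration only.

import networkx as nx

# Two-level hierarchical construction: a coarse skeleton graph whose nodes are
# each expanded into a copy of a fine motif graph. Composing the levels as a
# graph product keeps the hierarchy explicit in the node labels.
skeleton = nx.cycle_graph(6)   # coarse level (e.g., a tube circumference)
motif = nx.path_graph(4)       # fine level repeated at every coarse node

# Cartesian product: node (u, v) belongs to coarse cell u at fine position v.
mesh = nx.cartesian_product(skeleton, motif)

# The hierarchy is recoverable directly from the labels, so coarse grids for
# multigrid methods can be extracted without any geometric neighborhood search.
coarse_cells = {u for (u, _) in mesh.nodes}
print(mesh.number_of_nodes(), mesh.number_of_edges(), len(coarse_cells))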
Abstract:
This work presents detailed numerical calculations of the dielectrophoretic force in octupolar traps designed for single-cell trapping. A trap with eight planar electrodes is studied for spherical and ellipsoidal particles using an indirect implementation of the boundary element method (BEM). Multipolar approximations of orders one to three are compared with the full Maxwell stress tensor (MST) calculation of the electrical force on spherical particles. Ellipsoidal particles are also studied, but in their case only the dipolar approximation is available for comparison with the MST solution. The results show that the full MST calculation is only required in the study of non-spherical particles.
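For orientation, the first-order (dipolar) approximation compared against the Maxwell stress tensor above is, for a homogeneous sphere of radius $r$ suspended in a medium of permittivity $\varepsilon_m$, the standard time-averaged dielectrophoretic force (quoted from textbook theory, not reproduced from the paper):

\[ \langle \mathbf{F}_{\mathrm{DEP}} \rangle = 2\pi \varepsilon_m r^3 \, \operatorname{Re}[K(\omega)] \, \nabla |\mathbf{E}_{\mathrm{rms}}|^2, \qquad K(\omega) = \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2\varepsilon_m^*}, \]

where $K(\omega)$ is the Clausius-Mossotti factor and $\varepsilon_p^*$, $\varepsilon_m^*$ are the complex permittivities of particle and medium. Higher-order multipolar terms add contributions from higher field gradients, which is why the full MST calculation becomes indispensable for non-spherical particles.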
Abstract:
The article seeks empirical evidence on the determinants of health, as a measure of health capital, in a developing country after a far-reaching reform of the health sector. It follows the Grossman (1972) model, incorporating institutional factors in addition to individual and socioeconomic variables. The 1997 and 2000 surveys were used, in which respondents report subjectively on their health status and their type of affiliation to the health system. The estimation procedure is an ordered probit. The results show an important connection between individual, institutional, and socioeconomic variables and health status. The type of access to the health system puts pressure on health inequities.
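An ordered probit on self-reported health can be sketched with statsmodels; the file name, variable names, and coding of the covariates are hypothetical placeholders, since the abstract does not specify them.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical data: self-rated health as an ordered category plus numeric covariates.
df = pd.read_csv("health_survey.csv")
df["health_status"] = pd.Categorical(
    df["health_status"],
    categories=["poor", "fair", "good", "very_good"],
    ordered=True)

# Covariates assumed already numeric (e.g., insured coded 0/1).
X = df[["age", "income", "education_years", "insured"]]
res = OrderedModel(df["health_status"], X, distr="probit").fit(method="bfgs", disp=False)
print(res.summary())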
Abstract:
Case-crossover is one of the most used designs for analyzing the health-related effects of air pollution. Nevertheless, no one has reviewed its application and methodology in this context. Objective: We conducted a systematic review of case-crossover (CCO) designs used to study the relationship between air pollution and morbidity and mortality, from the standpoint of methodology and application. Data sources and extraction: A search was made of the MEDLINE and EMBASE databases. Reports were classified as methodologic or applied. From the latter, the following information was extracted: author, study location, year, type of population (general or patients), dependent variable(s), independent variable(s), type of CCO design, and whether effect modification was analyzed for variables at the individual level. Data synthesis: The review covered 105 reports that fulfilled the inclusion criteria. Of these, 24 addressed methodological aspects, and the remainder involved the design’s application. In the methodological reports, the designs that yielded the best results in simulation were symmetric bidirectional CCO and time-stratified CCO. Furthermore, we observed an increase across time in the use of certain CCO designs, mainly symmetric bidirectional and time-stratified CCO. The dependent variables most frequently analyzed were those relating to hospital morbidity; the pollutants most often studied were those linked to particulate matter. Among the CCO-application reports, 13.6% studied effect modification for variables at the individual level. Conclusions: The use of CCO designs has undergone considerable growth; the most widely used designs were those that yielded better results in simulation studies: symmetric bidirectional and time-stratified CCO. However, the advantages of CCO as a method of analysis of variables at the individual level are put to little use.
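In the time-stratified design highlighted above, each case day is compared with control days falling on the same weekday within the same calendar month; a minimal sketch of that referent selection (the pandas-based implementation is our own illustration):

import pandas as pd

def time_stratified_referents(case_day: pd.Timestamp) -> list[pd.Timestamp]:
    """Control days: same weekday and same calendar month as the case day."""
    month_days = pd.date_range(case_day.replace(day=1),
                               case_day + pd.offsets.MonthEnd(0))
    return [d for d in month_days
            if d.weekday() == case_day.weekday() and d != case_day]

# Example: an admission on 2000-06-14 (a Wednesday) is matched to the
# other Wednesdays of June 2000.
print(time_stratified_referents(pd.Timestamp("2000-06-14")))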
Abstract:
Circuit testing is a phase of the production process that is becoming increasingly important in the development of a new product. Test and diagnosis techniques for digital circuits have been successfully developed and automated, whereas this is not yet the case for analog circuits. Among all the methods proposed for diagnosing analog circuits, the most widely used are fault dictionaries. This thesis describes several of them, analyzing their advantages and drawbacks. In recent years, Artificial Intelligence techniques have become one of the most important research fields for fault diagnosis. This thesis develops two such techniques in order to cover some of the shortcomings of fault dictionaries. The first proposal is based on building a fuzzy system as an identification tool. The results obtained are quite good, since the fault is located in a high percentage of cases. On the other hand, the success rate is not good enough when, in addition, one tries to determine the deviation. Since fault dictionaries can be seen as a simplified approximation to Case-Based Reasoning (CBR), the second proposal extends fault dictionaries towards a CBR system. The aim is not to give a general solution to the problem but to contribute a new methodology, which consists of improving the diagnosis of fault dictionaries by adding and adapting new cases so as to become a Case-Based Reasoning system. The structure of the case base is described, as are the retrieval, reuse, revision, and retention tasks, with emphasis on the learning process. Throughout the text, several circuits are used to illustrate the test methods described, but the biquadratic filter in particular is used to validate the proposed methodologies, since it is one of the benchmarks proposed in the context of analog circuits. The faults considered are parametric, permanent, independent, and simple, although the methodology can easily be extrapolated to the diagnosis of multiple and catastrophic faults. The method focuses on testing passive components, although it could also be extended to faults in active ones.
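A fault dictionary can be viewed as a nearest-neighbour lookup from measured signatures to fault labels, which is the simplified CBR retrieval step the thesis builds on; the signature format and the Euclidean distance below are illustrative assumptions.

import numpy as np

# Hypothetical fault dictionary: measured signature (e.g., node voltages at a
# few test frequencies) -> fault label, built beforehand by simulating each fault.
dictionary = {
    "nominal":  np.array([1.02, 0.48, 0.25]),
    "R1_open":  np.array([1.65, 0.10, 0.05]),
    "C2_short": np.array([0.20, 0.85, 0.60]),
}

def diagnose(measurement: np.ndarray) -> str:
    """Retrieval step of CBR: return the stored case closest to the measurement."""
    return min(dictionary, key=lambda fault: np.linalg.norm(dictionary[fault] - measurement))

print(diagnose(np.array([1.60, 0.12, 0.06])))  # -> "R1_open"

# A full CBR system would go on to reuse/revise the retrieved case and retain
# the new, confirmed case in the case base, which is the extension proposed above.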
Abstract:
The scientific community working in Artificial Intelligence (AI) has carried out a great deal of work on how AI can help people find what they want on the Internet. The idea of recommender systems has been widely accepted by users. The main task of a recommender system is to locate items, information sources, and people related to the interests and preferences of a person or a group of people. This involves building user models and the ability to anticipate and predict the user's preferences. This thesis focuses on the study of AI techniques that improve the performance of recommender systems. Initially, a detailed analysis of the current state of the art in this field was carried out. This work has been organized as a taxonomy in which the recommender systems existing on the Internet are classified along 8 general dimensions. This taxonomy provides an indispensable knowledge base for the design of our proposal. Case-Based Reasoning (CBR) is a paradigm for learning and reasoning from experience that is well suited to recommender systems because of its grounding in human reasoning. This thesis presents a new CBR proposal applied to the field of recommendation, together with a forgetting mechanism for case-based profiles that controls the relevance and age of past experiences. The experimental results show that this proposal adapts profiles to users better and solves the utility problem suffered by CBR-based systems. Recommender systems improve the quality of their results dramatically when information about other users is used in recommending to a particular user. This thesis proposes the agentification of recommender systems in order to take advantage of interesting agent properties such as proactivity, encapsulation, and social ability. Collaboration between agents is carried out through an opinion-based filtering method and a trust-based collaborative filtering method. Both methods rely on a social model of trust that makes agents less vulnerable to others when collaborating. The experimental results show that the proposed collaborative recommender agents improve system performance while preserving the privacy of the user's personal data. Finally, this thesis also proposes a procedure for evaluating recommender systems that allows scientific discussion of the results. This proposal simulates users' behaviour over time based on real user profiles. We hope that this evaluation methodology will contribute to the progress of this research area.
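The forgetting mechanism for case-based profiles described above weighs each stored case by relevance and age; one plausible minimal realization (the exponential age decay and the drop threshold are our assumptions, not the thesis's exact scheme):

import math, time

DECAY = 0.01       # per-day decay rate (assumed)
DROP_BELOW = 0.1   # cases weaker than this are forgotten (assumed)

def current_weight(case: dict, now: float) -> float:
    """Combine base relevance with an exponential penalty on age."""
    age_days = (now - case["timestamp"]) / 86400.0
    return case["relevance"] * math.exp(-DECAY * age_days)

def forget(profile: list[dict]) -> list[dict]:
    """Keep only the cases still relevant enough to describe the user,
    which is what controls the utility problem of a growing case base."""
    now = time.time()
    return [c for c in profile if current_weight(c, now) >= DROP_BELOW]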
Abstract:
The work on Social Memory, focused on the biographic method and the paths of immaterial Heritage, is the fabric that we have chosen to substantiate the idea of museum. The social dimensions of memory, its construction and representation, are the thickness of the exhibition fabric. The specificity of museological work in contemporary times resembles a fine lace, a meticulous weaving of threads that flow from time, admirable lace, painstaking and complex, created with many needles, made up of hollow spots and stitches (of memories and things forgotten). Repetitions and symmetries are the pace that perpetuates it, the rhythmic grammar that gives it body. A fluid body, a single piece, circumstantial. It is always possible to create new patterns, new compositions, with the same threads. Accurately made, properly made, this lace of memories and things forgotten is always an extraordinary creation, a web of wonder that expands fantasy, generates value and feeds the endless reserve of the community’s knowledge, values and beliefs.
Abstract:
In this paper we show stability and convergence for a novel Galerkin boundary element method approach to the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data. This problem models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve a good approximation with a relatively low number of degrees of freedom we employ a graded mesh with smaller elements adjacent to discontinuities in impedance, and a special set of basis functions for the Galerkin method so that, on each element, the approximation space consists of polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. In the case where the impedance is constant outside an interval $[a,b]$, which only requires the discretization of $[a,b]$, we show theoretically and experimentally that the $L_2$ error in computing the acoustic field on $[a,b]$ is ${\cal O}(\log^{\nu+3/2}|k(b-a)| M^{-(\nu+1)})$, where $M$ is the number of degrees of freedom and $k$ is the wavenumber. This indicates that the proposed method is especially commendable for large intervals or a high wavenumber. In a final section we sketch how the same methodology extends to more general scattering problems.
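Concretely, the approximation space described above can be written on each mesh element as (our notation, inferred from the description of polynomials multiplied by plane-wave traces, with the boundary parametrized by arc length $x$):

\[ V_h\big|_{[x_{j-1},x_j]} = \operatorname{span}\left\{ x^m e^{ikx},\; x^m e^{-ikx} : 0 \le m \le \nu \right\}, \]

so that each element carries $2(\nu+1)$ degrees of freedom and can represent waves travelling in either direction along the boundary, which is what keeps $M$ small at high wavenumbers.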
Abstract:
We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than the earlier proofs based on the reciprocity relation. This allows us to handle the case of limited-aperture data and arbitrary incident fields. For both 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how the shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.
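The two-step structure at the heart of the PSM can be summarized as follows (standard form of the method, stated here for orientation; normalization and direction conventions vary between presentations and are not taken from this paper). One first chooses a density $g_z$ so that the Herglotz wave function with kernel $g_z$ approximates the point source $\Phi(\cdot,z)$ on a suitable approximation domain,

\[ \Phi(x,z) \approx \int_{\mathbb{S}^2} e^{ik\,x\cdot\hat{d}}\, g_z(\hat{d})\, ds(\hat{d}), \]

and then, up to convention-dependent normalization, evaluates the scattered field from the measured far field pattern $u^\infty$ as

\[ u^s(z) \approx \int_{\mathbb{S}^2} g_z(\hat{d})\, u^\infty(\hat{d})\, ds(\hat{d}). \]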
Abstract:
Context: Learning can be regarded as knowledge construction in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners’ competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user’s behaviour in the domain of knowledge construction. The users’ requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing an individual user’s requirements and transforming them into personalised information provision specifications. Hence the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on the qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions for profiling users’ requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with the techniques, can further enhance RE approaches for modelling the individual user’s needs and discovering the user’s requirements.
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux by Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model, with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can be measured directly. The total amount of Agulhas leakage can be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited within the model to the upper 300 m of the water column within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates, even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
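The first (thermohaline threshold) method amounts to integrating the cross-section velocity only where water is warmer and more saline than the calibrated cutoffs; a schematic numpy version (the array layout and threshold values are placeholders, not the study's calibrated numbers):

import numpy as np

def leakage_transport(v, T, S, dA, T_min=14.0, S_min=35.0):
    """Eulerian flux through the section, restricted to water exceeding the
    temperature and salinity thresholds; returns Sverdrup assuming SI inputs."""
    mask = (T > T_min) & (S > S_min)
    return np.sum(v[mask] * dA[mask]) / 1e6  # m^3/s -> Sv

# v, T, S, dA: 2-D arrays (depth x along-section) of cross-section velocity,
# temperature, salinity, and cell area on the GoodHope line. The thresholds
# themselves were calibrated in the study by maximizing the correlation with
# the Lagrangian float time series, and the result rescaled by linear regression.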
The sequential analysis of repeated binary responses: a score test for the case of three time points
Abstract:
In this paper a robust method is developed for the analysis of data consisting of repeated binary observations taken at up to three fixed time points on each subject. The primary objective is to compare outcomes at the last time point, using earlier observations to predict this for subjects with incomplete records. A score test is derived. The method is developed for application to sequential clinical trials, as at interim analyses there will be many incomplete records occurring in non-informative patterns. Motivation for the methodology comes from experience with clinical trials in stroke and head injury, and data from one such trial is used to illustrate the approach. Extensions to more than three time points and to allow for stratification are discussed. Copyright © 2005 John Wiley & Sons, Ltd.
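The paper's specific statistic is not given in the abstract, but the generic form of a score test, from which such statistics are built, is worth recalling: writing $U(\theta) = \partial \ell / \partial \theta$ for the score of the log-likelihood and $I(\theta)$ for the Fisher information, the statistic evaluated at the null-constrained estimate $\hat{\theta}_0$ is

\[ T = U(\hat{\theta}_0)^{\mathsf{T}} \, I(\hat{\theta}_0)^{-1} \, U(\hat{\theta}_0) \sim \chi^2_q \quad \text{under } H_0, \]

with $q$ the number of constrained parameters. Its appeal at interim analyses is that the model only needs to be fitted under the null hypothesis, even when many records are still incomplete.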
Abstract:
Hydroponic isotope labelling of entire plants (HILEP) is a cost-effective method enabling metabolic labelling of whole and mature plants with a stable isotope such as N-15. By utilising hydroponic media that contain N-15 inorganic salts as the sole nitrogen source, close to 100% N-15-labelling of proteins can be achieved. In this study, it is shown that HILEP, in combination with mass spectrometry, is suitable for relative protein quantitation of seven-week-old Arabidopsis plants submitted to oxidative stress. Protein extracts from pooled N-14- and N-15-hydroponically grown plants were fractionated by SDS-PAGE, digested and analysed by liquid chromatography electrospray ionisation tandem mass spectrometry (LC-ESI-MS/MS). Proteins were identified and the spectra of N-14/N-15 peptide pairs were extracted using their m/z, chromatographic retention time, isotopic distributions, and the m/z difference between the N-14 and N-15 peptides. Relative amounts were calculated as the ratio of the sum of the peak areas of the two distinct N-14 and N-15 peptide isotope envelopes. Using Mascot and the open source trans-proteomic pipeline (TPP), the data processing was automated for global proteome quantitation down to the isoform level by extracting isoform-specific peptides. With this combination of metabolic labelling and mass spectrometry it was possible to show differential protein expression in the apoplast of plants submitted to oxidative stress. Moreover, it was possible to discriminate between differentially expressed isoforms belonging to the same protein family, such as isoforms of xylanases and pathogen-related glucanases (PR 2). (C) 2008 Elsevier Ltd. All rights reserved.
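The relative quantitation step described above reduces, per peptide pair, to a ratio of summed peak areas of the light and heavy isotope envelopes; a schematic version (the data structure and peptide sequences are assumed for illustration, and real pipelines such as the TPP handle the envelope extraction themselves):

import math

# Hypothetical extracted peptide pairs: summed peak areas of the N-14 (light)
# and N-15 (heavy) isotope envelopes from the LC-ESI-MS/MS run.
peptide_pairs = [
    {"peptide": "AVDWSR",  "area_14N": 8.1e6, "area_15N": 4.2e6},
    {"peptide": "LLGTEYK", "area_14N": 2.3e6, "area_15N": 2.5e6},
]

for p in peptide_pairs:
    ratio = p["area_14N"] / p["area_15N"]
    # log2 ratios make up- and down-regulation symmetric around zero.
    print(f'{p["peptide"]}: ratio={ratio:.2f}, log2={math.log2(ratio):+.2f}')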