838 results for Multiple methods framework


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this project is to understand, under a social constructionist approach, what meanings external facilitators and organizational members (sponsors) working with dialogic methods attribute to themselves and their work. Dialogic methods, which aim to engage groups in flows of conversation to envisage and co-create their own future, are spreading fast within organizations as a means to achieve collective change. Sharing constructionist ideas about the possibility of multiple realities and about language as constitutive of such realities, dialogue has become a promising path to transformation, especially in a macro context of constant change and increasing complexity, in which traditional structures, relationships and forms of work are questioned. Research on the topic has mostly focused on specific methods or applications, with few attempts to study it in a broader sense. Also, despite the fact that dialogic methods rest on the assumption that realities are socially constructed, few studies approach the topic from a social constructionist perspective as a research methodology per se. Thus, while most existing research aims at explaining whether or how particular methods achieve particular results, my intention is to explore the meanings sustaining these new forms of organizational practice. Data were collected through semi-structured interviews with 25 people working with dialogic methods: 11 facilitators and 14 sponsors from 8 different organizations in Brazil. First, the findings indicate several contextual elements that seem to sustain the choice of dialogic methods. Within this context, there does not seem to be a clear or specific demand for dialogic methods, but rather a set of different motivations, objectives and focuses, which brings about several contrasts in the way participants name, describe and explain their experiences with such methods, including tensions around power relations, knowledge creation, identity and communication. Second, some central ideas or images were identified within such contrasts, pointing in two directions: dialogic methods as opportunities for the creation of new organizational realities (with images of a 'door' or a 'flow', for instance, suggesting that dialogic methods may open up access to other perspectives and the creation of new realities); and dialogic methods as new instrumental mechanisms that seem to reproduce traditional, non-dialogical forms of work and relationship. The individualistic tradition and its tendency toward rational schematism, pointed out by social constructionist scholars as strong traditions in Western culture, could be observed in some participants' accounts through the image of dialogic methods as a 'gym', for instance, in which dialogical (and idealized) 'abilities' could be taught and trained, turning dialogue into a tool rather than a means for transformation. In conclusion, I discuss the possible implications of such taken-for-granted assumptions and offer some insights into dialogue (and dialogic methods) as 'the art of being together'.

Relevance:

30.00%

Publisher:

Abstract:

Recent progress in the technology for single unit recordings has given the neuroscientific community the opportunity to record the spiking activity of large neuronal populations. At the same pace, statistical and mathematical tools were developed to deal with high-dimensional datasets typical of such recordings. A major line of research investigates the functional role of subsets of neurons with significant co-firing behavior: the Hebbian cell assemblies. Here we review three linear methods for the detection of cell assemblies in large neuronal populations that rely on principal and independent component analysis. Based on their performance in spike train simulations, we propose a modified framework that incorporates multiple features of these previous methods. We apply the new framework to actual single unit recordings and show the existence of cell assemblies in the rat hippocampus, which typically oscillate at theta frequencies and couple to different phases of the underlying field rhythm.
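A minimal sketch (in Python) of the PCA step shared by the methods this abstract reviews, assuming synthetic binned spike counts and the Marchenko-Pastur bound as the significance threshold for eigenvalues; the published framework also includes an ICA-based extraction of assembly patterns, which is omitted here.

```python
import numpy as np

# PCA step for assembly detection: z-score binned spike counts, take the
# eigenvalues of the neuron-by-neuron correlation matrix, and treat
# eigenvalues above the Marchenko-Pastur bound as significant (one assembly
# pattern each). The data, sizes and planted assembly below are illustrative
# assumptions, not the recordings analysed in the paper.

rng = np.random.default_rng(1)
n_neurons, n_bins = 50, 10_000

# Independent Poisson background activity...
spikes = rng.poisson(0.5, size=(n_neurons, n_bins)).astype(float)
# ...plus one planted assembly: neurons 0-9 are jointly elevated in ~2% of bins.
co_bins = rng.random(n_bins) < 0.02
spikes[:10, co_bins] += rng.poisson(2.0, size=(10, co_bins.sum()))

# Z-score each neuron, then form the correlation matrix.
z = (spikes - spikes.mean(axis=1, keepdims=True)) / spikes.std(axis=1, keepdims=True)
corr = z @ z.T / n_bins

# Marchenko-Pastur upper bound for uncorrelated data with this aspect ratio.
q = n_neurons / n_bins
lambda_max = (1 + np.sqrt(q)) ** 2

eigvals, eigvecs = np.linalg.eigh(corr)
n_assemblies = int(np.sum(eigvals > lambda_max))
print("significant eigenvalues (estimated assemblies):", n_assemblies)

# The leading eigenvector should weight the planted assembly neurons heavily.
pattern = eigvecs[:, -1]
print("top-weighted neurons:", np.argsort(np.abs(pattern))[-10:])
```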

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

The Nelson-Oppen combination method allows several decision procedures, each designed for a specific theory, to be combined to reason about more comprehensive theories through the principle of equality propagation. Theorem provers based on this model benefit from its modular character and can evolve more easily and incrementally. Difference logic is a subtheory of linear arithmetic. It consists of constraints of the form x − y ≤ c, where x and y are variables and c is a constant. Difference logic arises in many problems, such as digital circuits, scheduling and temporal systems, and is predominant in several other cases. Difference logic can also be modelled using graph theory, which allows many efficient, well-known graph algorithms to be applied. A decision procedure for difference logic is able to reason over thousands of constraints. Its main goal is to report whether a set of difference-logic constraints is satisfiable (the variables can take values that make the set consistent) or not. Moreover, to work within a Nelson-Oppen combination scheme, the decision procedure needs additional capabilities, such as generating equalities between variables, proofs of inconsistency, premises, and so on. This work presents a decision procedure for the theory of difference logic within an architecture based on the Nelson-Oppen combination method. The procedure was integrated into the haRVey prover, where its behaviour could be observed. Implementation details and experimental tests are reported.
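A minimal sketch (in Python) of the graph-based satisfiability check the abstract alludes to, under the standard encoding in which each constraint x − y ≤ c becomes an edge y → x with weight c and unsatisfiability corresponds to a negative cycle (detected here with Bellman-Ford). The constraints and variable names are illustrative; a decision procedure usable in a Nelson-Oppen combination would additionally produce entailed equalities and explanations of inconsistency, as the abstract notes.

```python
# Difference-logic satisfiability via negative-cycle detection.
# Constraints and names are illustrative, not taken from the haRVey integration.

def diff_logic_sat(constraints):
    """constraints: list of (x, y, c) meaning x - y <= c."""
    nodes = {v for x, y, _ in constraints for v in (x, y)}
    dist = {v: 0 for v in nodes}          # implicit source at distance 0
    edges = [(y, x, c) for x, y, c in constraints]

    # Relax |V| - 1 times; a further successful relaxation means a negative cycle.
    for _ in range(len(nodes) - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return False, None            # inconsistent constraint set
    return True, dist                     # dist is a satisfying assignment

# x - y <= 2, y - z <= -3, z - x <= 0  => summing gives 0 <= -1: unsatisfiable.
print(diff_logic_sat([("x", "y", 2), ("y", "z", -3), ("z", "x", 0)]))
# Dropping the last constraint makes the set satisfiable.
print(diff_logic_sat([("x", "y", 2), ("y", "z", -3)]))
```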

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Objectives. To test the hypothesis that multiple firing and silica deposition on the zirconia surface influence the bond strength to porcelain. Materials and methods. Specimens were cut from yttria-stabilized zirconia blocks and sintered. Half of the specimens (group S) were silica coated by physical vapor deposition (PVD) via reactive magnetron sputtering before porcelain veneering. The remaining specimens (group N) had no treatment before veneering. The contact angle before and after silica deposition was measured. Porcelain was applied on all specimens and submitted to two (N2 and S2) or three (N3 and S3) firing cycles. The resulting porcelain-zirconia blocks were sectioned to obtain bar-shaped specimens with a cross-sectional area of 1 mm². Specimens were attached to a universal testing machine and tested in tension until fracture. Fractured surfaces were examined using optical microscopy. Data were statistically analyzed using two-way ANOVA, Tukey's test (α = 0.05) and Weibull analysis. Results. Specimens submitted to three firing cycles (N3 and S3) showed higher mean bond strength values than specimens fired twice (N2 and S2). The mean contact angle was lower for specimens with the silica layer, but this had no effect on bond strength. Most fractures initiated at the porcelain-zirconia interface and propagated through the porcelain. Significance. The molecular deposition of silica on the zirconia surface had no influence on the bond strength to porcelain, while the number of porcelain firing cycles significantly affected the bond strength of the ceramic system, partially accepting the study hypothesis. Still, the Weibull modulus values of the S groups were significantly greater than the m values of the N groups. (C) 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Scattering of positronium (Ps) from atoms (H, He, Ne, Ar), a molecule (H₂) and an ion (He⁺) has been investigated using a coupled-channel (CC) formalism with a regularized non-local exchange potential. The advantage of using such a regularized exchange potential in the close-coupling formalism and the normalizability aspect of the solution at low energies with a minimum effective coupling are discussed. Results for the elastic and total scattering cross-sections, resonance and binding energies in Ps-H, and pick-off annihilation results in Ps-He are found to be in excellent agreement with measurements and variational predictions. (C) 2000 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

We suggest a pseudospectral method for solving the three-dimensional time-dependent Gross-Pitaevskii (GP) equation, and use it to study the resonance dynamics of a trapped Bose-Einstein condensate induced by a periodic variation in the atomic scattering length. When the frequency of oscillation of the scattering length is an even multiple of one of the trapping frequencies along the x, y or z direction, the corresponding size of the condensate executes resonant oscillation. Using the concept of the differentiation matrix, the partial-differential GP equation is reduced to a set of coupled ordinary differential equations, which is solved by a fourth-order adaptive step-size control Runge-Kutta method. The pseudospectral method is contrasted with the finite-difference method for the same problem, where the time evolution is performed by the Crank-Nicolson algorithm. The latter method is shown to be more suitable for a three-dimensional standing-wave optical-lattice trapping potential.
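A minimal 1D sketch (in Python) of the pseudospectral idea described above for a GP-type equation, assuming a Fourier (FFT-based) spectral derivative on a periodic box, a harmonic trap, and a fixed-step classical RK4 in place of the paper's three-dimensional geometry and adaptive step-size control; the grid and the nonlinearity strength are illustrative.

```python
import numpy as np

# 1D GP-type equation: i dpsi/dt = [-(1/2) d^2/dx^2 + (1/2) x^2 + g |psi|^2] psi.
# Assumptions for brevity: FFT spectral derivative, harmonic trap, fixed-step
# RK4; g and the grid are illustrative, not the paper's 3D setup.

N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # spectral wavenumbers
g = 1.0                                      # nonlinearity strength (assumed)

def rhs(psi):
    # Spectral second derivative: F^-1[ -k^2 F[psi] ]
    d2 = np.fft.ifft(-(kx ** 2) * np.fft.fft(psi))
    return -1j * (-0.5 * d2 + (0.5 * x ** 2 + g * np.abs(psi) ** 2) * psi)

psi = np.exp(-x ** 2 / 2).astype(complex)    # Gaussian initial state
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

dt, steps = 1e-3, 2000
for _ in range(steps):                       # classical fixed-step RK4
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

norm = np.sum(np.abs(psi) ** 2) * dx         # should stay ~1
width = np.sqrt(np.sum(x ** 2 * np.abs(psi) ** 2) * dx / norm)
print("norm =", round(norm, 6), " rms width =", round(width, 4))
```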

Relevance:

30.00%

Publisher:

Abstract:

Statement of problem. Highly polished enamel surfaces are recommended for axial tooth surfaces that will serve as guiding planes and be contacted by component parts of a removable partial denture. There is little evidence to support the assumption that this tooth modification will provide accurate adaptation of the framework and prevent build-up of plaque. Purpose. The aim of this investigation was to evaluate the surface roughness of tooth enamel, prepared to serve as guiding planes, with different polishing systems. Material and methods. Four different methods (designated A, B, C, and D) for finishing and polishing the prepared enamel surfaces of 20 freshly extracted third molar teeth were studied. Each method involved 3, 4, or 5 different steps. The roughness of each specimen was measured at the start of each method before recontouring, after recontouring, and after each step of the 4 finishing and polishing procedures. The 4 experimental finishing methods were applied after recontouring the axial surfaces (buccal, lingual, and proximal) of each tooth. Thus the 20 teeth (60 surfaces) were finished and polished by use of 1 of the experimental methods. Surface roughness was measured with a profilometer (μm); the readings of the unpolished enamel surfaces were recorded as control measurements. Results were statistically analyzed with one-way analysis of variance followed by Tukey's test at the 95% level of confidence. Results. The highest roughness mean values (14.41 μm to 16.44 μm) were found when the diamond bur was used at high speed for tooth preparation. A significant decrease in roughness values was observed with the diamond bur at low speed (P<.05). Analysis of the roughness values revealed that all polishing methods produced surface roughness similar to that of the corresponding control teeth. Conclusion. Within the limitations of this study, all finishing procedures tested effectively yielded an enamel surface similar to the original unpolished enamel.

Relevance:

30.00%

Publisher:

Abstract:

The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures: the Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.

Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 Mbytes of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: the investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways by which radionuclides enter the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Those radionuclides which are able to enter living cells by metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and radiological protection. The time behavior of trace concentrations in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The general multiple-compartment model (GMCM) is the most powerful and most widely accepted method for biokinetic studies, allowing the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetics data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restriction on the complexity of the problem: this version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace is shorter than the volume rise time. Another restriction is related to the central-flux model: the model considered in the code assumes that there is one central compartment (e.g., blood) that connects the flow with all other compartments, and flow between the other compartments is not included.
Typical running time: depends on the calculation chosen. Using the Derivative method the time is very short (a few minutes) for any number of compartments. When the Gauss-Marquardt iterative method is used, the calculation can take approximately 5-6 hours when ~15 compartments are considered. (C) 2006 Elsevier B.V. All rights reserved.
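A minimal sketch (in Python) of the central-flux compartment idea described in the program summary: one central compartment exchanging a tracer with two peripheral compartments, with the transfer coefficients recovered from concentration-time curves by least squares. The rates, data and compartment layout are illustrative assumptions, not STATFLUX code or its fitting procedures.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Central-flux linear compartment model: a central compartment (e.g. blood)
# exchanging a tracer with two peripheral compartments. All names, rates and
# data below are illustrative assumptions, not STATFLUX parameters.

def dcdt(t, c, k):
    # c = [central, organ1, organ2]; k = [k01, k10, k02, k20]
    k01, k10, k02, k20 = k
    c0, c1, c2 = c
    return [
        -(k01 + k02) * c0 + k10 * c1 + k20 * c2,   # central compartment
        k01 * c0 - k10 * c1,                       # peripheral compartment 1
        k02 * c0 - k20 * c2,                       # peripheral compartment 2
    ]

def simulate(k, t_eval):
    sol = solve_ivp(dcdt, (0, t_eval[-1]), [1.0, 0.0, 0.0],
                    t_eval=t_eval, args=(k,), rtol=1e-8)
    return sol.y

# Synthetic "measured" concentrations generated from known rates plus noise.
t_obs = np.linspace(0, 10, 25)
k_true = np.array([0.8, 0.3, 0.5, 0.2])
rng = np.random.default_rng(0)
c_obs = simulate(k_true, t_obs) + rng.normal(0, 0.01, (3, t_obs.size))

# Fit the four transfer coefficients to the concentration curves.
def residuals(k):
    return (simulate(k, t_obs) - c_obs).ravel()

fit = least_squares(residuals, x0=[0.5, 0.5, 0.5, 0.5], bounds=(0, 5))
print("fitted transfer coefficients:", np.round(fit.x, 3))
```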

Relevance:

30.00%

Publisher:

Abstract:

Statistical methods of multiple regression analysis, trend-surface analysis and principal components analysis were applied to seismographic data recorded during production blasting at a diabase quarry in the urban area of Campinas (SP), Brazil. The purpose of these analyses was to determine the influence of the following variables on the variation of the peak particle velocity (PPV): distance (D), charge weight per delay (W), and scaled distance (SD), together with properties of the rock mass (orientation, frequency and angle of geological discontinuities; depth of bedrock and thickness of the soil overburden). This approach identified the variables with the largest influence (loadings) on the variation of ground vibration, as well as the behavior and spatial tendency of this variation. The results showed the best relationship between PPV and D, with D being the most important factor in the attenuation of the ground vibrations. The geological joints and the depth to bedrock have a larger influence than the explosive charges on the variation of the vibration levels, whereas the frequencies appear to be more influenced by the amount of soil overburden.
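A minimal sketch (in Python) of the multiple-regression step described above, assuming the usual log-linear attenuation form in which log(PPV) is regressed on log(D) and log(W); the synthetic data and coefficients are illustrative, and the study additionally brings the geological variables into the analysis and complements the regression with trend-surface and principal components analyses.

```python
import numpy as np

# Ordinary least squares of log(PPV) on log(distance) and log(charge per delay).
# Synthetic data and coefficients are illustrative assumptions, not the
# Campinas quarry measurements.

rng = np.random.default_rng(7)
n = 60
D = rng.uniform(50, 600, n)          # distance to the blast (m)
W = rng.uniform(5, 80, n)            # charge weight per delay (kg)

# Synthetic attenuation law PPV = K * D^-1.6 * W^0.8 with lognormal scatter.
ppv = 700 * D ** -1.6 * W ** 0.8 * rng.lognormal(0, 0.2, n)

# Design matrix for log(PPV) = b0 + b1*log(D) + b2*log(W).
X = np.column_stack([np.ones(n), np.log(D), np.log(W)])
y = np.log(ppv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

print("intercept, exponent of D, exponent of W:", np.round(beta, 2))
print("R^2 =", round(r2, 3))   # D should carry the largest (negative) load
```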

Relevance:

30.00%

Publisher:

Abstract:

Species richness is central to ecological theory, with practical applications in conservation, environmental management and monitoring. Several techniques are available for measuring species richness and composition of amphibians in breeding pools, but the relative efficacy of these methods for sampling high-diversity Neotropical amphibian fauna is poorly understood. I evaluated seven studies from south and south-eastern Brazil to compare the relative and combined effectiveness of two methods for measuring species richness at anuran breeding pools: acoustic surveys with visual encounter of adults and dipnet surveys of larvae. I also compared the relative efficacy of each survey method in detecting species with different reproductive modes. Results showed that both survey methods underestimated the number of species when used separately; however, a close approximation of the actual number of species in each breeding pool was obtained when the methods were combined. There was no difference between survey methods in detecting species with different reproductive modes. These results indicate that researchers should employ multiple survey methods that target both adult and larval life history stages in order to accurately assess anuran species richness at breeding pools in the Neotropics.

Relevance:

30.00%

Publisher:

Abstract:

The sedimentary Curitiba Basin is located in the central-southern part of the First Paranaense Plateau and comprises Curitiba (PR) and part of the neighbouring municipalities (fig. 1). It is presumed to be of Plio-Pleistocene age. It has a shallow sedimentary fill, represented by the Guabirotuba Formation (BIGARELLA and SALAMUNI, 1962), which is distributed over a large area of about 3,000 km². Its internal geometry, not yet entirely known, is currently the object of detailed research relating its geological evolution to Cenozoic tectonic movements. For the purpose of this study, the definition of the structural contour of the basement and of its depocenters is fundamental. This paper presents the results of the integration of surface and subsurface data, processed by statistical methods, which allowed a more precise definition of the morphostructural framework of the basement. For the analysis of the spatial geological data, specific software was used for the statistical processing of trend-surface analyses. The data used in this study are of the following types: a) drilling logs of water wells; b) descriptions of surface points from geological maps (CRPM, 1977); c) descriptions of points from geotechnical drillings and geological surveys. The logs of 223 water wells were selected out of 770. The description files of 700 outcrops, as well as planialtimetric field data, were used to locate the basement outcrops. A matrix with five columns was thus set up: UTM E-W (x) and UTM N-S (y); surface altitude (z); altimetric elevation of the contact between the sedimentary rocks and the basement (k); isopachs (l). For the study of the basement limits, 2nd- and 3rd-degree polynomial trend surfaces of the altimetric data were analysed (figs. 2 and 3). For the residuals, the inverse-distance-squared method was used (fig. 4). The adjustments and the explanations of the surfaces were made with the aid of multiple linear regressions. The 3rd-degree polynomial trend surface (fig. 3) confirmed that the basement tends to be more exposed towards NNW-SSE, the data trend being best explained by an ellipse whose NE-SW-striking, SW-dipping axis coincides with the trough of the basin observed in the basement trend surface. The analyses performed and the respective images give a good degree of confidence in the geometric model of the Curitiba Basin and in the morphostructure of its basement. The trend surfaces allow the structural contours of the topographic surface (figs. 5 and 6) and of the basement (figs. 7 and 8) to be sketched with greater confidence, as well as the delimitation of intermediate structural highs, which were responsible for isolated and asymmetric depocenters. These details are shown in the maps of figures 9 and 10. Thus, the Curitiba Basin is made up of a structural trough stretching NE-SW, with maximum preserved depths of about 80 m, separated by highs and depocenters striking NW-SE (fig. 11). These structural features seem to have been controlled by tectonic reactivation during the Tertiary (HASUI, 1990), and their younger dissection was conditioned by neotectonic processes (SALAMUNI and EBERT, 1994).
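A minimal sketch (in Python) of a 2nd-degree polynomial trend-surface fit like the one described above: the basement elevation k is modelled as a quadratic surface in the coordinates (x, y) by least squares, and the residuals are what a local interpolator (the paper uses inverse-distance-squared weighting) would then map. The synthetic points are illustrative, not the Curitiba well data.

```python
import numpy as np

# 2nd-degree trend surface fitted by least squares. The coordinates, number
# of points and synthetic "basement elevation" are illustrative assumptions.

rng = np.random.default_rng(3)
n = 223                                   # number of wells used in the study
x = rng.uniform(0, 30, n)                 # E-W coordinate (km, local origin)
y = rng.uniform(0, 30, n)                 # N-S coordinate (km)
# Synthetic basement elevation: a gentle NE-SW trough plus noise (metres).
k = 900 - 2.0 * x - 1.0 * y + 0.04 * (x - y) ** 2 + rng.normal(0, 5, n)

# Design matrix for k = b0 + b1 x + b2 y + b3 x^2 + b4 x y + b5 y^2.
A = np.column_stack([np.ones(n), x, y, x ** 2, x * y, y ** 2])
coef, *_ = np.linalg.lstsq(A, k, rcond=None)

trend = A @ coef
residuals = k - trend                     # input for the local interpolation
r2 = 1 - np.sum(residuals ** 2) / np.sum((k - k.mean()) ** 2)
print("trend-surface coefficients:", np.round(coef, 3))
print("explained variance (R^2):", round(r2, 3))
```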