935 results for Implementation Model
Abstract:
Currently, many museums, botanical gardens and herbaria keep data on biological collections, and researchers use computational tools to digitize these data and provide access to them through data portals. The replication of databases in portals can be accomplished through the use of protocols and data schemas. However, implementing this solution demands a large amount of time, both for transferring fragments of data and for processing data within the portal. As data digitization grows in these institutions, this scenario tends to become increasingly exacerbated, making it hard to keep the records on the portals up to date. As an original contribution, this research proposes analysing the data replication process in order to evaluate the performance of portals. The Inter-American Biodiversity Information Network (IABIN) biodiversity data portal of pollinators was used as a case study, since it supports both situations: conventional data replication of records of specimen occurrences and of interactions between them. With the results of this research, it is possible to simulate a situation before its implementation, thus predicting the performance of replication operations. Additionally, these results may contribute to future improvements to this process, in order to decrease the time required to make the data available in portals. © Rinton Press.
Abstract:
This article presents a model for fostering learner autonomy, shows some results achieved in the application of this model, and discusses challenges still to be faced. The model comprises the investigation of problem areas in each individual's learning process, the identification of their preferred learning styles, the use of technological tools to improve autonomy in learning, the development of a wider range of language learning strategies, and the implementation of self-monitoring and self-assessment routines. The model has been applied over the last three years with undergraduate Letras (Modern Languages) students pursuing teaching degrees in German, French or English. Three kinds of results emerge from the research data: first, the model proved effective in scaffolding the students' autonomous language learning; second, the autonomous learning experiences of these future language teachers may later be mirrored in their professional lives with their own students; finally, the data provided by the research participants can shed light on the variety of ways in which people learn in the local context.
Abstract:
Valency is an inherent property of nominalizations representing higher-order entities, and as such it should be included in their underlying representation. On the basis of this assumption, I postulate that cases of non-overt arguments, which are very common in Brazilian Portuguese and in many other languages of the world, should be considered a special type of valency realization. This paper aims to give empirical support to this postulate by showing that non-overt arguments are both semantically and pragmatically motivated. The semantic and pragmatic motivations for non-overt arguments may be accounted for by the dynamic implementation of the FDG model. I argue that the way valency is realized by means of non-overt arguments suggests a strong parallelism between nominalizations and other types of non-finite embedded constructions – like infinitival and participial ones. By providing empirical evidence for this parallelism I arrive at the conclusion that there are at least three kinds of non-finite embedded constructions, rather than only two, as suggested by Dik (1997).
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Artificial neural networks (ANNs) have been widely applied to the resolution of complex biological problems. An important feature of neural models is that their implementation is not precluded by the theoretical distribution shape of the data used. The performance of ANNs is frequently deemed significantly superior to linear or non-linear regression-based statistical methods when suitable sample sizes are provided, especially in multidimensional and non-linear processes. The current work used three well-known neural network methods to evaluate whether these models provide more accurate outcomes than a conventional regression method in predicting pupal weight of Chrysomya megacephala, a species of blowfly (Diptera: Calliphoridae), using larval density (i.e. the initial number of larvae), the amount of available food and pupal size as input data. The neural networks yielded more accurate predictions than the statistical model (multiple regression). Among the three network types used (Multi-layer Perceptron, Radial Basis Function and Generalised Regression Neural Network), no considerable differences were detected. The superiority of these neural models over a classical statistical method is important because more accurate models may clarify several intricate aspects of the nutritional ecology of blowflies.
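One of the network types mentioned above, the Generalised Regression Neural Network, is essentially Nadaraya-Watson kernel regression, which makes the contrast with multiple regression easy to sketch. The snippet below compares the two on a synthetic nonlinear response; the response surface, sample sizes and kernel bandwidth are illustrative assumptions, not the paper's data or fitted models.

```python
import numpy as np

rng = np.random.default_rng(1)

def response(density, food):
    # ASSUMED nonlinear stand-in for pupal weight as a function of
    # larval density and food availability (not the paper's data)
    return food / (1.0 + 0.05 * density) + 0.1 * rng.standard_normal(density.shape)

X_train = rng.uniform(0, 100, size=(200, 2))
y_train = response(X_train[:, 0], X_train[:, 1] / 10.0)
X_test = rng.uniform(0, 100, size=(100, 2))
y_test = response(X_test[:, 0], X_test[:, 1] / 10.0)

# multiple linear regression via ordinary least squares
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
pred_lin = np.column_stack([np.ones(len(X_test)), X_test]) @ coef

def grnn(X_tr, y_tr, X_te, sigma=8.0):
    # GRNN prediction = kernel-weighted average of training targets
    d2 = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_tr) / w.sum(axis=1)

pred_grnn = grnn(X_train, y_train, X_test)

mse_lin = np.mean((pred_lin - y_test) ** 2)
mse_grnn = np.mean((pred_grnn - y_test) ** 2)
print(mse_lin, mse_grnn)
```

On a response like this one, with curvature and an interaction between the inputs, the kernel-based model tracks the surface that the linear fit cannot represent, mirroring the kind of advantage the abstract reports.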
Abstract:
Given the similar interests of United Way organizations and universities in planning, implementation, and evaluation of human services, the two social institutions could be extensively and effectively partnering with one another. However, there is little documentation that such cooperative efforts are taking place. This article describes one such collaboration in Lincoln, Nebraska. The purpose of the article is to show the potential of such collaboration to improve community-wide coordination and outcomes by following the principles of a community-engagement model, to generate more effective use of evaluative tools that can assist in developing evidence-based practices in community planning, and to connect areas of study within the university to United Way efforts.
Abstract:
The adoption of the principles of equality and universality stipulated in legislation for the sanitation sector requires discussion of innovation. The existing model was able to meet sanitary demands, but could not serve all areas, causing disparities in vulnerable regions. Universal implementation of sanitation requires identifying the know-how that promotes it and analysing the model adopted today in order to establish a new method. An analysis of the different viewpoints on the restructuring process is necessary for the definition of public policy, especially in health, and for understanding its complexities and its importance in consolidating social practices and organizational designs. These points are discussed here, with the aim of contributing to the universal implementation of sanitation in urban areas, by means of a review of the literature and of practices in the industry. By way of conclusion, it is considered that accepting a particular concept or idea in sanitation means choosing certain effective interventions in the network and in the lives of individual users, and implies a redefinition of the space in which control and management of sewerage networks are exercised, such that connected users are perceived as groups with different interests.
Abstract:
Objective: This study aims to address difficulties reported by the nursing team during the process of changing the management model in a public hospital in Brazil. Methods: This qualitative study used thematic content analysis as proposed by Bardin, and data were analyzed using the theoretical framework of Bolman and Deal. Results: The top-down implementation of Participatory Management contradicted its underlying philosophy and thereby negatively influenced employee acceptance of the change. The decentralized structure of the Participatory Management Model was implemented, but shared decision-making was only partially utilized. Although communication within the unit was facilitated, more significant difficulties arose from the lack of communication between units. Values and principles need to be shared by teams; however, this will happen only if managers restructure accountabilities by changing the job descriptions of all team members. Conclusion: Innovative management models that depart from the premise of decentralized decision-making and increased communication encourage accountability, increase motivation and satisfaction, and contribute to improving the quality of care. The contribution of the study is that it describes the complexity of implementing an innovative management model, examines dissent and intentionally acknowledges the difficulties faced by employees in the organization.
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP) as had been studied in [R. B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour concerning the latent trait distribution. Also, they developed a Metropolis-Hastings within the Gibbs sampling (MHWGS) algorithm based on the density of the SNCP. They showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and the estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in opposition to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. 
Junker, A straightforward approach to Markov Chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of these priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performed as well as that in [3] in terms of parameter recovery, mainly under the Jeffreys prior. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though it is relatively small. A real data analysis is presented jointly with the development of model-fit assessment tools, and the results are compared with those obtained by Azevedo et al. They indicate that the hierarchical approach allows MCMC algorithms to be implemented more easily, facilitates convergence diagnostics, and can be very useful for fitting more complex skew IRT models.
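The stochastic (hierarchical) representation of the skew-normal due to Henze (1986), which the proposed sampler exploits, can be sketched numerically as follows. For simplicity this uses the direct parameterization rather than the centred parameterization (SNCP) that the paper works with, and the shape value chosen is an arbitrary illustration.

```python
import numpy as np

# Henze's (1986) stochastic representation: with Z0, Z1 independent
# standard normals and delta in (-1, 1),
#   X = delta*|Z0| + sqrt(1 - delta**2)*Z1
# follows a skew-normal law with shape alpha = delta / sqrt(1 - delta**2).

def sample_skew_normal(delta, size, rng):
    z0 = rng.standard_normal(size)
    z1 = rng.standard_normal(size)
    return delta * np.abs(z0) + np.sqrt(1.0 - delta**2) * z1

rng = np.random.default_rng(42)
delta = 0.8  # arbitrary illustrative asymmetry
x = sample_skew_normal(delta, 200_000, rng)

# theoretical mean of the skew-normal in this parameterization
theoretical_mean = delta * np.sqrt(2.0 / np.pi)
print(x.mean(), theoretical_mean)
```

Sampling the latent half-normal variable `|Z0|` explicitly is what turns the skew-normal into a two-level hierarchy of conditionally normal draws, which is why the hierarchical scheme needs only one Metropolis-Hastings step instead of two.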
Abstract:
Companies are currently choosing to integrate logics and systems to achieve better solutions. These combinations include companies striving to join the logic of the material requirement planning (MRP) system with lean production systems. The purpose of this article was to design an MRP as part of the implementation of an enterprise resource planning (ERP) system in a company that produces agricultural implements and has used the lean production system since 1998. The proposal is based on innovation theory, network theory, lean production systems, ERP systems and hybrid production systems, which combine MRP components with concepts of lean production systems. The analytical approach of innovation networks enables verification of the links and relationships among the companies and departments of the same corporation. The analysis begins with the MRP implementation project carried out in a Brazilian metallurgical company and follows the operationalisation of the MRP project through to the stabilisation of production. The main point is that the MRP system should support the company's operations by responding in time to demand fluctuations, facilitating the creation process and controlling the branch offices in other countries that use components produced at the parent plant, hence ensuring more accurate estimates of stockpiles. Consequently, the article presents the enterprise knowledge development organisational modelling methodology in order to represent further models (goals, actors and resources, business rules, business processes and concepts) that should be included in this MRP implementation process for the new configuration of the production system.
Abstract:
We present a generalized test case generation method, called the G method. Although inspired by the W method, the G method, in contrast, allows for test suite generation even in the absence of characterization sets for the specification models. Instead, the G method relies on knowledge about the index of certain equivalences induced on the implementation models. We show that the W method can be derived from the G method as a particular case. Moreover, we discuss some naturally occurring infinite classes of FSM models over which the G method generates test suites that are exponentially more compact than those produced by the W method.
Abstract:
Abstract Background The criteria for organ sharing have developed into a system that prioritizes liver transplantation (LT) for patients with hepatocellular carcinoma (HCC) who have the highest risk of wait-list mortality. In some countries this model allows only patients within the Milan Criteria (MC, defined by the presence of a single nodule up to 5 cm, or up to three nodules none larger than 3 cm, with no evidence of extrahepatic spread or macrovascular invasion) to be evaluated for liver transplantation. This policy implies that some patients with HCC slightly more advanced than allowed by the current strict selection criteria will be excluded, even though LT for these patients might be associated with acceptable long-term outcomes. Methods We propose a mathematical approach to study the consequences of relaxing the MC for patients with HCC who do not comply with the current rules for inclusion in the transplantation candidate list. We consider overall 5-year survival rates compatible with the ones reported in the literature. We calculate the strategy that minimizes the total mortality of the affected population, that is, the total number of people in both groups of HCC patients who die within 5 years of the implementation of the strategy, either from post-transplantation death or from death due to the underlying HCC. We illustrate the above analysis with a simulation of a theoretical population of 1,500 HCC patients with tumor size exponentially distributed. The parameter λ, obtained from the literature, was equal to 0.3. As the total number of patients in the real samples was 327, this implied an average size of 3.3 cm and a 95% confidence interval of [2.9; 3.7]. The total number of available livers to be grafted was assumed to be 500.
Results With 1,500 patients on the waiting list and 500 grafts available, we simulated the total number of deaths among both transplanted and non-transplanted HCC patients after 5 years as a function of the tumor size of transplanted patients. The total number of deaths drops monotonically with tumor size, reaching a minimum at a size of 7 cm and increasing thereafter. At a tumor size of 10 cm, the total mortality equals that of the 5 cm threshold of the Milan criteria. Conclusion We conclude that it is possible to include patients with tumor size up to 10 cm without increasing the total mortality of this population.
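The population setup described above (1,500 patients, 500 grafts, exponentially distributed tumor sizes with mean 3.3 cm) is easy to reproduce; what the abstract does not give are the size-dependent survival rates, so the survival functions in the sketch below are placeholder assumptions, as is the graft-allocation rule. The sketch only illustrates the shape of the computation (expected total deaths as a function of the size cap), not the paper's numerical results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_grafts = 1500, 500
sizes = rng.exponential(scale=3.3, size=n_patients)  # tumor size in cm, mean 3.3

def survival_transplanted(size):
    # ASSUMED 5-year post-LT survival, decaying with tumor size
    return np.clip(0.75 - 0.04 * size, 0.1, None)

SURVIVAL_UNTRANSPLANTED = 0.15  # ASSUMED 5-year survival without LT

def expected_deaths(size_cap):
    """Expected 5-year deaths in the whole population when only tumors
    up to size_cap are eligible for one of the n_grafts livers."""
    eligible = np.where(sizes <= size_cap)[0]
    transplanted = eligible[:n_grafts]  # first-come allocation (assumed)
    p_live = np.full(n_patients, SURVIVAL_UNTRANSPLANTED)
    p_live[transplanted] = survival_transplanted(sizes[transplanted])
    return n_patients - p_live.sum()

for cap in (5.0, 7.0, 10.0):
    print(cap, round(expected_deaths(cap), 1))
```

With the paper's actual survival curves in place of the placeholders, sweeping `size_cap` over a grid is what produces the mortality-versus-size curve whose minimum the authors locate at 7 cm.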
Abstract:
Abstract Background Over the last years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks were designed considering the abstractions provided by this new paradigm. We call this type of framework Crosscutting Framework (CF), as it usually encapsulates a generic and abstract design of one crosscutting concern. However, most of the proposed CFs employ white-box strategies in their reuse process, requiring mainly two technical skills: (i) knowing the syntax details of the programming language employed to build the framework and (ii) being aware of the architectural details of the CF and its internal nomenclature. Another problem is that the reuse process can only be initiated once the development process reaches the implementation phase, preventing it from starting earlier. Method In order to solve these problems, we present in this paper a model-based approach for reusing CFs which shields application engineers from technical details, letting them concentrate on what the framework really needs from the application under development. To support our approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former is used to describe the framework structure, and the latter is in charge of supporting the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be automatically generated. Results We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated productivity during the reuse process, and the second evaluated the effort of maintaining applications developed with both CF versions.
The results show an improvement of 97% in productivity; however, little difference was perceived regarding the effort needed to maintain the applications. Conclusion Using the approach presented here, it was possible to conclude the following: (i) the instantiation of CFs can be automated, and (ii) the productivity of developers is improved when they use a model-based instantiation approach.
Abstract:
Doctoral programme: Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería, Instituto Universitario (SIANI)
Abstract:
The accuracy and performance of current variational optical flow methods have increased considerably in recent years. The complexity of these techniques is high, and sufficient care has to be taken with the implementation. The aim of this work is to present a comprehensible implementation of recent variational optical flow methods. We start with an energy model that relies on brightness and gradient constancy terms and a flow-based smoothness term. We minimize this energy model and derive an efficient implicit numerical scheme. In the experimental results, we evaluate the accuracy and performance of this implementation on the Middlebury benchmark database. We show that it is a competitive solution with respect to current methods in the literature. In order to increase performance, we use a simple strategy to parallelize the execution on multi-core processors.
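As an illustration of the kind of variational scheme described, here is a minimal Horn-Schunck-style solver in NumPy: brightness constancy plus quadratic smoothness, solved by Jacobi-type iterations. It omits the gradient constancy term, the robust penalties and the implicit scheme of the work itself; the test image, the regularization weight and the iteration count are arbitrary choices for the sketch.

```python
import numpy as np

def _avg(f):
    # 4-neighbour average (periodic boundaries, for brevity)
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))

def horn_schunck(I1, I2, alpha=1.0, n_iter=300):
    """Estimate flow (u, v) minimising the classic energy:
    data term (Ix*u + Iy*v + It)^2 plus alpha^2 times the
    quadratic smoothness of u and v."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        ub, vb = _avg(u), _avg(v)
        t = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ub - Ix * t
        v = vb - Iy * t
    return u, v

# synthetic check: a periodic pattern shifted by one pixel in x
n = 64
X, Y = np.meshgrid(np.arange(n), np.arange(n))
I1 = 10 * np.sin(2 * np.pi * X / n) + 5 * np.cos(2 * np.pi * Y / n)
I2 = np.roll(I1, 1, axis=1)  # true flow: u = 1, v = 0
u, v = horn_schunck(I1, I2)
print(round(u.mean(), 2), round(v.mean(), 2))
```

The pointwise update inside the loop is what makes the method easy to parallelize: each pixel reads only its four neighbours from the previous iterate, so rows (or tiles) can be distributed across cores, in the spirit of the multi-core strategy the abstract mentions.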