881 results for Interaction modeling. Model-based development. Interaction evaluation.
Abstract:
A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space through an informed selection of attributes based on the current state of the art in software engineering and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing the concepts.
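As a rough illustration of how such an isomorphism-based traversal could be automated, the following Python sketch (using networkx) expands a seed graph by simple vertex-set operations and keeps only candidates that preserve the seed's attributed structure under (sub)graph isomorphism; the attribute names, seed graph, and expansion operator are invented for the example, not taken from the paper.

```python
# Hypothetical sketch of an isomorphism-based traversal over candidate designs,
# assuming software structure is encoded as an attributed graph (networkx).
import networkx as nx
from networkx.algorithms import isomorphism

def preserves_seed(candidate, seed, attrs=("layer",)):
    """Keep a candidate only if the attributed seed structure survives inside it,
    tested via (sub)graph isomorphism with categorical attribute matching."""
    matcher = isomorphism.GraphMatcher(
        candidate, seed,
        node_match=isomorphism.categorical_node_match(list(attrs), [None] * len(attrs)))
    return matcher.subgraph_is_isomorphic()

def expand(graph, new_attrs):
    """Generate new solutions by a simple set operation on the vertex set:
    add one vertex and connect it to each existing vertex in turn."""
    for anchor in graph.nodes:
        g = graph.copy()
        new_id = max(g.nodes) + 1
        g.add_node(new_id, **new_attrs)
        g.add_edge(anchor, new_id)
        yield g

# Seed solution selected from the narrowed search space (illustrative attributes).
seed = nx.Graph()
seed.add_nodes_from([(0, {"layer": "ui"}), (1, {"layer": "core"}), (2, {"layer": "db"})])
seed.add_edges_from([(0, 1), (1, 2)])

# One traversal step: expand the seed and retain attribute-preserving candidates.
survivors = [g for g in expand(seed, {"layer": "core"}) if preserves_seed(g, seed)]
print(f"{len(survivors)} candidates preserve the seed structure")
```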
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model offers a promising way to reduce the model's dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows simultaneous projections of an input tensor onto more than one direction along each mode. In practice, however, multi-dimensional data are collected under the same or very similar conditions, so that the data share some common latent components while also having their own independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies both the common components of the parameters across all regression tasks and the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modeling further reduce the total number of parameters, with lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
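As a toy illustration of a Tucker-structured coefficient tensor (not the authors' algorithm, which links factors across multiple tasks and adds a sparsity regulariser), the following Python sketch fits a single regression whose coefficient matrix is factorised as W = U1·G·U2ᵀ by alternating least squares on simulated data.

```python
# Toy sketch (not the authors' algorithm): one tensor regression task whose
# coefficient matrix has a Tucker structure W = U1 @ G @ U2.T, fitted by
# alternating least squares; the sparsity regulariser from the paper is omitted.
import numpy as np

rng = np.random.default_rng(0)
I, J, R1, R2, N = 8, 6, 2, 2, 300
X = rng.normal(size=(N, I, J))                                  # order-2 tensor covariates
W_true = rng.normal(size=(I, R1)) @ rng.normal(size=(R1, J))    # low-rank ground truth
y = np.einsum("nij,ij->n", X, W_true) + 0.05 * rng.normal(size=N)

U1 = rng.normal(size=(I, R1))
U2 = rng.normal(size=(J, R2))
G = rng.normal(size=(R1, R2))

for _ in range(30):
    # The response is linear in each factor when the other two are fixed,
    # so each sub-problem is an ordinary least-squares solve.
    Z1 = np.einsum("nij,jr->nir", X, U2 @ G.T).reshape(N, -1)   # features for vec(U1)
    U1 = np.linalg.lstsq(Z1, y, rcond=None)[0].reshape(I, R1)
    Z2 = np.einsum("nij,ir->njr", X, U1 @ G).reshape(N, -1)     # features for vec(U2)
    U2 = np.linalg.lstsq(Z2, y, rcond=None)[0].reshape(J, R2)
    ZG = np.einsum("ir,nij,js->nrs", U1, X, U2).reshape(N, -1)  # features for vec(G)
    G = np.linalg.lstsq(ZG, y, rcond=None)[0].reshape(R1, R2)

W_hat = U1 @ G @ U2.T
print("relative error:", np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))
```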
Abstract:
The evaluation of forecast performance plays a central role both in the interpretation and use of forecast systems and in their development. Different evaluation measures (scores) are available, often quantifying different characteristics of forecast performance. The properties of several proper scores for probabilistic forecast evaluation are contrasted and then used to interpret decadal probability hindcasts of global mean temperature. The Continuous Ranked Probability Score (CRPS), the Proper Linear (PL) score, and I. J. Good's logarithmic score (also referred to as Ignorance) are compared; although information from all three may be useful, the logarithmic score has an immediate interpretation and is not insensitive to forecast busts. Neither CRPS nor PL is local; this is shown to produce counterintuitive evaluations by CRPS. Benchmark forecasts from empirical models such as Dynamic Climatology place the scores in context. Comparing scores for forecast systems based on physical models (in this case HadCM3, from the CMIP5 decadal archive) against such benchmarks is more informative than comparing forecast systems based on similar physical simulation models only with each other. It is shown that a forecast system based on HadCM3 outperforms Dynamic Climatology in decadal global mean temperature hindcasts; Dynamic Climatology previously outperformed a forecast system based on HadGEM2, and reasons for these results are suggested. Forecasts of aggregate data (5-year means of global mean temperature) are, of course, narrower than forecasts of annual averages due to the suppression of variance; while the average "distance" between the forecasts and a target may be expected to decrease, little if any discernible improvement in probabilistic skill is achieved.
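For readers unfamiliar with the scores mentioned above, the following Python sketch computes a sample-based CRPS and Good's logarithmic (Ignorance) score for one ensemble forecast of a scalar quantity; the ensemble values, the observation, and the Gaussian fit used as the forecast density are illustrative choices, not the paper's hindcast setup.

```python
# Illustrative sketch of two of the scores discussed above, for a single
# ensemble forecast of an observed value; the numbers are made up.
import numpy as np
from scipy import stats

def crps_ensemble(ens, obs):
    """Sample-based CRPS: E|X - y| - 0.5 E|X - X'| (lower is better; not local)."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

def ignorance(ens, obs):
    """Good's logarithmic (Ignorance) score, -log2 p(y), here using a Gaussian
    fitted to the ensemble as the forecast density (a simple illustrative choice)."""
    mu, sigma = np.mean(ens), np.std(ens, ddof=1)
    return -np.log2(stats.norm.pdf(obs, loc=mu, scale=sigma))

ens = np.array([14.31, 14.45, 14.52, 14.60, 14.38, 14.70])   # hypothetical hindcast (deg C)
obs = 14.55                                                   # hypothetical observed value
print(f"CRPS = {crps_ensemble(ens, obs):.3f}, Ignorance = {ignorance(ens, obs):.2f} bits")
```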
Abstract:
How tropical cyclone (TC) activity in the northwestern Pacific might change in a future climate is assessed using multidecadal Atmospheric Model Intercomparison Project (AMIP)-style and time-slice simulations with the ECMWF Integrated Forecast System (IFS) at 16-km and 125-km global resolution. Both models reproduce many aspects of the present-day TC climatology and variability well, although the 16-km IFS is far more skillful in simulating the full intensity distribution and genesis locations, including their changes in response to El Niño–Southern Oscillation. Both IFS models project a small change in TC frequency at the end of the twenty-first century related to distinct shifts in genesis locations. In the 16-km IFS, this shift is southward and is likely driven by the southeastward penetration of the monsoon trough/subtropical high circulation system and the southward shift in activity of the synoptic-scale tropical disturbances in response to the strengthening of deep convective activity over the central equatorial Pacific in a future climate. The 16-km IFS also projects about a 50% increase in the power dissipation index, mainly due to significant increases in the frequency of the more intense storms, which is comparable to the natural variability in the model. Based on composite analysis of large samples of supertyphoons, both the development rate and the peak intensities of these storms increase in a future climate, which is consistent with their tendency to develop more to the south, within an environment that is thermodynamically more favorable for faster development and higher intensities. Coherent changes in the vertical structure of supertyphoon composites show system-scale amplification of the primary and secondary circulations with signs of contraction, a deeper warm core, and an upward shift in the outflow layer and the frequency of the most intense updrafts. Considering the large differences in the projections of TC intensity change between the 16-km and 125-km IFS, this study further emphasizes the need for high-resolution modeling in assessing potential changes in TC activity.
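The power dissipation index referred to above is, in Emanuel's standard definition, the time integral of the cube of the maximum sustained wind speed accumulated over each storm's lifetime; the short Python sketch below computes it for invented 6-hourly tracks, purely to make the quantity concrete.

```python
# Minimal sketch of the power dissipation index (PDI): the accumulated cube of
# the maximum sustained wind speed over each storm's lifetime. Tracks are invented.
import numpy as np

def pdi(vmax, dt_hours=6.0):
    """PDI for one storm: discrete time integral of vmax^3 (vmax in m s^-1,
    time step converted to seconds)."""
    vmax = np.asarray(vmax, dtype=float)
    return np.sum(vmax ** 3) * dt_hours * 3600.0

# Two invented storm tracks (maximum sustained wind in m s^-1, 6-hourly points).
storms = [
    [18, 25, 33, 42, 55, 60, 51, 35, 22],
    [17, 23, 30, 38, 45, 40, 28],
]
basin_pdi = sum(pdi(v) for v in storms)
print(f"basin PDI = {basin_pdi:.3e} m^3 s^-2")
```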
Abstract:
FS CMa type stars are a recently described group of objects with the B[e] phenomenon that exhibit strong emission-line spectra and strong IR excesses. In this paper, we report the first attempt at a detailed modeling of IRAS 00470+6429, for which we have the best set of observations. Our modeling is based on two key assumptions: the star has a main-sequence luminosity for its spectral type (B2), and the circumstellar (CS) envelope is bimodal, composed of a slowly outflowing disklike wind and a fast polar wind. Both outflows are assumed to be purely radial. We adopt a novel approach to describe the dust formation site in the wind that employs timescale arguments for grain condensation and a self-consistent solution for the dust destruction surface. With the above assumptions we were able to satisfactorily reproduce many observational properties of IRAS 00470+6429, including the H I line profiles and the overall shape of the spectral energy distribution. Our adopted recipe for dust formation proved successful in reproducing the correct amount of dust formed in the CS envelope. Possible shortcomings of our model, as well as suggestions for future improvements, are discussed.
Abstract:
The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks that were previously solvable only by expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential for effective scheduling. In this paper, we review the evolution of scheduling approaches, focusing on distributed environments. We also evaluate current approaches for process behavior extraction and prediction, aiming to select an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and automatically detects critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The results obtained demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online prediction due to its low computational cost and good precision.
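As a loose illustration of online behavior prediction (not the authors' model), the following Python sketch predicts the next value of a process-behavior series using a delay embedding and a nearest-neighbour lookup, in the spirit of exploiting the chaotic structure mentioned above; the workload series and parameters are synthetic.

```python
# Toy sketch, not the authors' model: an online predictor for a process-behaviour
# series (e.g. CPU demand per interval) using a delay embedding and a
# nearest-neighbour lookup over past states.
import numpy as np

def predict_next(history, dim=3, k=3):
    """Embed the series in R^dim, find the k past states closest to the current
    state, and average what followed them as the one-step-ahead prediction."""
    h = np.asarray(history, dtype=float)
    if len(h) < dim + k + 1:
        return float(h[-1])                        # not enough data: persistence
    states = np.array([h[i:i + dim] for i in range(len(h) - dim)])
    current, past, nxt = states[-1], states[:-1], h[dim:]
    dists = np.linalg.norm(past - current, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(nxt[nearest]))

# Usage on a synthetic bursty workload.
rng = np.random.default_rng(1)
t = np.arange(400)
load = 50 + 30 * np.sin(t / 8.0) + rng.normal(0, 3, size=t.size)
pred = predict_next(load[:-1])
print(f"predicted {pred:.1f}, observed {load[-1]:.1f}")
```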
Abstract:
In interval-censored survival data, the event of interest is not observed exactly but is only known to occur within some time interval. Such data appear very frequently. In this paper, we are concerned only with parametric forms, and so a location-scale regression model based on the exponentiated Weibull distribution is proposed for modeling interval-censored data. We show that the proposed log-exponentiated Weibull regression model for interval-censored data represents a parametric family of models that includes other regression models broadly used in lifetime data analysis. Assuming interval-censored data, we employ a frequentist analysis, a jackknife estimator, a parametric bootstrap and a Bayesian analysis for the parameters of the proposed model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to assess global influence. Furthermore, for different parameter settings, sample sizes and censoring percentages, various simulations are performed; in addition, the empirical distributions of some modified residuals are displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a modified deviance residual in log-exponentiated Weibull regression models for interval-censored data.
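To make the parametric setting concrete, the following Python sketch maximises the interval-censored log-likelihood of an exponentiated Weibull distribution on simulated data using scipy; the covariate (location-scale) structure, jackknife/bootstrap and Bayesian analyses of the paper are not reproduced.

```python
# Hedged sketch of the core computation behind such a model: the log-likelihood
# of interval-censored data under an exponentiated Weibull distribution,
# maximised numerically on simulated data.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
true = dict(a=2.0, c=1.5, scale=3.0)
t = stats.exponweib.rvs(true["a"], true["c"], scale=true["scale"], size=300, random_state=rng)
left = np.floor(t)                      # event known only to lie in [floor(t), floor(t)+1)
right = left + 1.0

def neg_loglik(params):
    a, c, scale = np.exp(params)        # optimise on the log scale to keep parameters > 0
    p = (stats.exponweib.cdf(right, a, c, scale=scale)
         - stats.exponweib.cdf(left, a, c, scale=scale))
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = optimize.minimize(neg_loglik, x0=np.log([1.0, 1.0, 1.0]), method="Nelder-Mead")
print("estimates (a, c, scale):", np.exp(res.x))
```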
Abstract:
The adsorption behavior of several amphiphilic polyelectrolytes of poly(maleic anhydride-alt-styrene) functionalized with naphthyl and phenyl groups onto amino-terminated silicon wafers has been studied by means of null-ellipsometry, atomic force microscopy (AFM) and contact angle measurements. The maximum of adsorption, Γ(plateau), varies with the ionic strength, the polyelectrolyte structure and the chain length. Values of Γ(plateau) obtained at low and high ionic strengths indicate that the adsorption follows the "screening-reduced adsorption" regime. Large aggregates were detected in solution by means of dynamic light scattering and fluorescence measurements. However, AFM indicated the formation of smooth layers and the absence of aggregates. A model based on two-step adsorption behavior was proposed. In the first step, isolated chains in equilibrium with the aggregates in solution adsorb onto the amino-terminated surface. The adsorption is driven by electrostatic interaction between the protonated surface and carboxylate groups. This first layer exposes naphthyl or phenyl groups to the solution. The second-layer adsorption is then driven by hydrophobic interaction between the surface and the chains and exposes carboxylate groups to the medium, which repel the forthcoming chains by electrostatic repulsion. Upon drying, some hydrophobic naphthyl or phenyl groups might be oriented toward the air, as revealed by contact angle measurements. Such amphiphilic polyelectrolyte layers worked well for the build-up of multilayers with chitosan.
Abstract:
A dynamic atmosphere generator with a naphthalene emission source has been constructed and used for the development and evaluation of a bioluminescence sensor based on the bacterium Pseudomonas fluorescens HK44 immobilized in 2% agar gel (101 cell mL^-1) placed in sampling tubes. A steady naphthalene emission rate (around 7.3 nmol min^-1 at 27 °C and 7.4 mL min^-1 of purified air) was obtained by covering the diffusion unit containing solid naphthalene with a PTFE filter membrane. The time elapsed from gelation of the agar matrix to analyte exposure ("maturation time") was found to be relevant for the bioluminescence assays, being most favorable between 1.5 and 3 h. The maximum light emission, observed after 80 min, depends on the analyte concentration and the exposure time (evaluated between 5 and 20 min), but not on the flow rate of naphthalene in the sampling tube over the range of 1.8-7.4 nmol min^-1. A good linear response was obtained between 50 and 260 nmol L^-1, with a limit of detection estimated at 20 nmol L^-1, far below the recommended threshold limit value for naphthalene in air.
Abstract:
A procedure for characterizing the global uncertainty of a rainfall-runoff simulation model based on grey numbers is presented. With the grey numbers technique, the uncertainty is characterized by an interval; once the parameters of the rainfall-runoff model have been properly defined as grey numbers, grey mathematics and functions make it possible to obtain simulated discharges in the form of grey numbers whose envelope defines a band representing the vagueness/uncertainty associated with the simulated variable. The grey numbers representing the model parameters are estimated so that the band obtained from the envelope of simulated grey discharges includes an assigned percentage of observed discharge values while remaining as narrow as possible. The approach is applied to a real case study, highlighting that a rigorous application of the procedure for direct simulation through the rainfall-runoff model with grey parameters involves long computational times. However, these times can be significantly reduced using a simplified computing procedure with minimal approximations in the quantification of the grey numbers representing the simulated discharges. Relying on this simplified procedure, the conceptual rainfall-runoff grey model is calibrated, and the uncertainty bands obtained downstream of both the calibration and validation processes are compared with those obtained by a well-established approach for characterizing uncertainty, GLUE. The results of the comparison show that the proposed approach may be a valid tool for characterizing the global uncertainty associated with the output of a rainfall-runoff simulation model.
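A minimal sketch of the grey-number (interval) propagation idea is given below, applied to a toy linear-reservoir step rather than the paper's conceptual rainfall-runoff model; the parameter bounds and forcing are invented.

```python
# Minimal sketch of interval ("grey number") propagation through a toy
# linear-reservoir runoff step; model, bounds and rainfall are illustrative only.
from dataclasses import dataclass

@dataclass
class Grey:
    lo: float
    hi: float
    def __add__(self, other):
        return Grey(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Grey(min(products), max(products))

def step(storage, rain, k):
    """One time step of a linear reservoir with grey storage coefficient k:
    discharge q = k * storage, new storage = storage + rain - q."""
    q = k * storage
    storage = storage + Grey(rain, rain) + Grey(-1.0, -1.0) * q
    return storage, q

k = Grey(0.18, 0.25)                     # grey (interval-valued) model parameter
s = Grey(10.0, 10.0)                     # initial storage, assumed crisp
for rain in [5.0, 12.0, 0.0, 3.0]:
    s, q = step(s, rain, k)
    print(f"discharge band: [{q.lo:.2f}, {q.hi:.2f}]")
```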
Abstract:
Microwave remote sensing has high potential for soil moisture retrieval. However, efficient retrieval of soil moisture depends on optimally choosing the retrieval parameters. In this study, an initial evaluation of the SMOS L2 product is first performed, and then four approaches for soil moisture retrieval from SMOS brightness temperature are reported. The tau-omega rationale based on the radiative transfer equation is used for the soil moisture retrievals. The single channel algorithm (SCA) using H polarisation is implemented with modifications, which include effective temperatures simulated from ECMWF (downscaled using the WRF-NOAH Land Surface Model (LSM)) and MODIS. The retrieved soil moisture is then used for soil moisture deficit (SMD) estimation through empirical relationships, with the Probability Distributed Model based SMD as a benchmark. The squared correlation during calibration indicates a value of R² = 0.359 for approach 4 (WRF-NOAH LSM based LST with optimized roughness parameters), followed by approach 2 (optimized roughness parameters and MODIS based LST) (R² = 0.293), approach 3 (WRF-NOAH LSM based LST with no optimization) (R² = 0.267) and approach 1 (MODIS based LST with no optimization) (R² = 0.163). Similarly, during validation the highest performance is reported by approach 4, with the other approaches following a trend similar to calibration. All performances are depicted in a Taylor diagram, which indicates that H polarisation using ECMWF based LST gives better performance for SMD estimation than the original SMOS L2 products at the catchment scale.
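For reference, the following Python sketch implements the standard zeroth-order tau-omega forward model (atmospheric contributions neglected) and inverts it for soil reflectivity at one polarisation; the parameter values are illustrative, and the dielectric mixing step that converts reflectivity to soil moisture is not shown.

```python
# Hedged sketch of the zeroth-order tau-omega forward model underlying such
# retrievals (atmospheric terms neglected); all numbers are illustrative.
import numpy as np
from scipy import optimize

def tau_omega_tb(r_soil, t_soil, t_veg, tau, omega, theta_deg):
    """Brightness temperature at one polarisation for soil reflectivity r_soil."""
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))   # vegetation transmissivity
    emission_soil = (1.0 - r_soil) * t_soil * gamma
    emission_veg = (1.0 - omega) * (1.0 - gamma) * t_veg * (1.0 + r_soil * gamma)
    return emission_soil + emission_veg

# Single-channel-style inversion: given an observed TB_H, solve for soil reflectivity.
t_soil, t_veg, tau, omega, theta = 295.0, 293.0, 0.12, 0.06, 42.5   # illustrative values
tb_obs = 255.0
r_hat = optimize.brentq(
    lambda r: tau_omega_tb(r, t_soil, t_veg, tau, omega, theta) - tb_obs, 0.0, 0.6)
print(f"retrieved soil reflectivity r_H = {r_hat:.3f}")
```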
Abstract:
The aim of this dissertation is to show, through a case study, what happens in the public administration of the state of Paraná, specifically in two organizations: one under the legal regime of 'direct administration' and the other under 'indirect administration'. Using structured interviews, the study examines the gap between discourse and practice with respect to what was developed in the training area within the state's human resource policy. For two decades, the training function in the state's public administration has been changing and undergoing internal (re)structuring. These changes are due to pressure generated either by the natural demands for change within the area itself or by the directions and goals set by the governmental spheres (state and federal). On the one hand, this study analyzes the performance of the training function from 1987 to 1994, in order to identify the factors behind the area's lack of structure and the coherence between planned and accomplished actions. On the other hand, it compares discourse and practice in the human resource policy implemented and adopted by the government. The results of the field research, together with the bibliographic review, lead to the conclusion that, although official and formal documents outline a human resource policy for the state, there were evident contradictions between the proposal and what the state actually delivered. The qualitative data analysis showed that most actions were implemented on a case-by-case basis. During the period covered by the case study, the human resource area, specifically training and development, underwent constant (re)structuring. As a consequence, both institutions responsible for the training area lost time and financial resources. Legal changes, internal disputes over institutional space, and a lack of alignment and synchrony once again resulted in discontinued action in the area. Nevertheless, it is apparent that the government is concerned with the development and evaluation of its civil servants, although it continues to act without structured and integrated planning connected to any human resource system. The study therefore confirms that the formulation and implementation of an effective human resource policy, whether through an analytic model or not, must be centered on integrated action interrelated with all the subsystems of the human resource area, neither in a disguised way nor tied to the discourse of a law or of government projects.
Abstract:
This study aims to obtain and empirically verify a meta-model that can support and deepen the understanding of the phenomenon of resistance to information systems. It is an explanatory, quantitative piece of research in which, through an extensive review of the international literature, the main existing theories and models on the topic are surveyed and consolidated. In order to gain a better understanding of the research problem, a meta-model of factors relevant to resistance behavior towards information systems is proposed. This model considers a set of aspects which, although previously discussed, had for the most part not yet been tested empirically, namely: (i) the idiosyncratic characteristics of individuals, (ii) the technical aspects inherent to information systems, (iii) the characteristics of socio-technical interaction, (iv) the characteristics of power and political interaction, and finally (v) the characteristics of the organizations in which technology and people are embedded and interact. The research instrument was a structured questionnaire administered over the Internet, with its questions contextualized in terms of ERP (Enterprise Resource Planning) systems. A total of 169 responses was obtained, from a sample composed exclusively of Brazilian information technology (IT) managers who had been through at least one ERP implementation during their careers. Once the data had been collected, statistical tests based on factor analysis were applied in order to arrive at a definitive model. In the new model obtained through the validation provided by factor analysis, each identified factor represents a cause of resistance behavior towards information systems. Finally, hypotheses derived from the new model were tested, examining the relationships between managers' direct perception of resistance and the various factors considered relevant to explaining this behavior. As a result of the study, a model for analyzing resistance behavior towards information systems was consolidated, based on the perception of the IT manager and contextualized in ERP systems.
Abstract:
Distance education has grown significantly in Brazil in recent years, and with this expansion the challenges related to its management also grow. This study draws on the various conceptions and theoretical foundations of approaches to organizational structure and distance education systems in order to conceive what has been termed the configuration of distance education (EAD) management. As the research design, the relationship between the management configurations of the distance-learning Business Administration courses of the Universidade Aberta do Brasil (UAB) pilot project and the grades assigned to them by the Exame Nacional de Desempenho de Estudantes (Enade) was analyzed, identifying the theories and models manifest in their context. The study starts from a description of the organizational structure of the unit responsible for distance education at the Universidade Estadual do Maranhão (Uema), the Universidade Estadual da Paraíba (UEPB) and the Universidade Federal do Ceará (UFC), carrying out a comparative analysis of the dimensions of complexity, centralization and coordination in these three institutions. The distance education systems of the UAB pilot course were also analyzed and compared against the Ministry of Education (MEC) quality guidelines for distance higher education, focusing on the components of interdisciplinarity, teaching materials, assessment, multidisciplinary team, communication and the infrastructure of the regional centers. Based on these analyses, the different grades assigned by Enade to the courses are discussed, and the correspondence between the organizational structures and distance education systems described and these results is explained. This is a predominantly qualitative, descriptive, explanatory, multi-case study, whose data were collected through interviews, focus groups, online questionnaires and documents. Primary data were treated through categorical content analysis and secondary data through document analysis. The observations, made from a descriptive-interpretative perspective with a cross-sectional design, allow inferences about the management configurations at each university. A strong and direct relationship is evident between the management configurations of the pilot course and the respective Enade results. Otto Peters's theory of industrialization and the distance education model based on distribution through regional centers can explain the structuring reflected in the modus operandi of the Universidade Aberta do Brasil. It is therefore concluded that the differences in the Enade results of students at Uema, UEPB and UFC are directly related to the way the units responsible for mediating distance education are structured at these universities. The UAB pilot course at UEPB did not show the required adherence to the MEC quality guidelines, whereas Uema and UFC showed strong compliance with the established criteria, which confirms the relationship between those quality guidelines and the Enade results. It is also concluded that the assessment and multidisciplinary team components were the ones most strongly related to students' performance in Enade. The results were discussed in light of the reviewed theoretical framework, and recommendations derived from the findings of this research were presented.