818 results for Parameter setting


Relevance: 100.00%

Abstract:

Three main models of parameter setting have been proposed: the Variational model proposed by Yang (2002; 2004), the Structured Acquisition model endorsed by Baker (2001; 2005), and the Very Early Parameter Setting (VEPS) model advanced by Wexler (1998). The VEPS model contends that parameters are set early. The Variational model supposes that children employ statistical learning mechanisms to decide among competing parameter values, so this model anticipates gradual setting of parameters and delays in parameter setting when the critical input is sparse. On the Structured Acquisition model, delays occur because parameters form a hierarchy, with higher-level parameters set before lower-level parameters. Assuming that children freely choose the initial value, they will sometimes mis-set parameters. When that happens, however, the input is expected to trigger a precipitous rise in one parameter value and a corresponding decline in the other value. We will point to the kind of child language data that is needed in order to adjudicate among these competing models.
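
As an illustration of the statistical learning the Variational model assumes, the following sketch implements a linear reward-penalty learner for a single binary parameter; the update rule follows Yang's general scheme, but the input proportions, learning rate, and run length are hypothetical.

```python
import random

def variational_learner(adv1=0.01, adv0=0.005, gamma=0.02, n_inputs=100_000, p=0.5):
    """Hedged sketch of a Yang-style variational learner for one binary parameter,
    using the linear reward-penalty scheme. adv1/adv0 are the assumed shares of
    input sentences parsable only under value 1 / value 0 (illustrative numbers);
    all remaining input is ambiguous and parsable under either value."""
    for _ in range(n_inputs):
        value = 1 if random.random() < p else 0        # pick a grammar with its current weight
        r = random.random()
        if r < adv1:
            input_type = 1                             # unambiguous evidence for value 1
        elif r < adv1 + adv0:
            input_type = 0                             # unambiguous evidence for value 0
        else:
            input_type = None                          # ambiguous: both values parse it
        parses = input_type is None or input_type == value
        if parses:                                     # reward the selected value
            p = p + gamma * (1 - p) if value == 1 else p * (1 - gamma)
        else:                                          # penalise it, shifting weight to the rival
            p = p * (1 - gamma) if value == 1 else p + gamma * (1 - p)
    return p

# Sparse unambiguous evidence yields slow, gradual convergence on value 1.
print(round(variational_learner(), 3))
```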

Relevance: 100.00%

Abstract:

This paper examines assumptions about future prices used in real estate applications of DCF models. We confirm both the widespread reliance on an ad hoc rule of increasing period-zero capitalization rates by 50 to 100 basis points to obtain terminal capitalization rates, and the inability of the rule to project future real estate pricing. To understand how investors form expectations about future prices, we model the spread between the contemporaneous period-zero going-in and terminal capitalization rates, and the spread between terminal rates assigned in period zero and going-in rates assigned in period N. Our regression results confirm a statistical relationship between the terminal-to-next-holding-period going-in capitalization rate spread and the period-zero discount rate, although other economically significant variables are statistically insignificant. Linking terminal capitalization rates by assumption to going-in capitalization rates implies that investors view future real estate pricing with myopic expectations. We discuss alternative specifications, devoid of such linkage, that align more closely with a rational expectations view of future real estate pricing.
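
For concreteness, the sketch below shows how the ad hoc rule enters a standard DCF valuation: the terminal capitalization rate is obtained by adding a 50 to 100 basis point spread to the period-zero going-in rate, and the resale (terminal) value is the year-N+1 NOI capitalized at that rate. All input figures are hypothetical.

```python
def dcf_value(noi, growth, discount_rate, going_in_cap, terminal_spread_bps=75, years=10):
    """Illustrative DCF with the ad hoc rule discussed above: the terminal (exit)
    capitalization rate is the period-zero going-in rate plus a 50-100 bp spread."""
    terminal_cap = going_in_cap + terminal_spread_bps / 10_000
    cash_flows = [noi * (1 + growth) ** t for t in range(1, years + 1)]
    terminal_value = cash_flows[-1] * (1 + growth) / terminal_cap   # year N+1 NOI capitalized at exit
    pv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))
    pv += terminal_value / (1 + discount_rate) ** years
    return pv

# Hypothetical asset: $1.0m NOI, 2% growth, 8% discount rate, 6% going-in cap rate.
print(round(dcf_value(1_000_000, 0.02, 0.08, 0.06)))
```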

Relevance: 70.00%

Abstract:

This study discusses efficiency criteria for each of the elements that make up a center pivot and applies the analysis to two sets of systems located in the regions of Cruz Alta and Santo Augusto, state of Rio Grande do Sul, Brazil. The methodology combines water and energy assessment through an indicator called Normalized Specific Consumption in Irrigation (CENI), thus allowing a comparison between equipment and projects. The CENI in the Cruz Alta region showed 72% of the equipment above the standard (8.68 kWh mm⁻¹ ha⁻¹ 100 m⁻¹), and in the Santo Augusto region 64.28% of the equipment had consumption above the standard. The mean irrigation efficiency for the Cruz Alta region was 29.85%, with a standard deviation of 5.41%, and for the Santo Augusto region it was 29.02%, with a standard deviation of 5.15%.
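
The abstract does not define how the indicator is computed; the sketch below only illustrates a dimensional reading of the units quoted above (kWh per mm of water applied, per hectare irrigated, per 100 m of head), so the formula and the input figures are assumptions rather than the study's actual method.

```python
def ceni(energy_kwh, depth_mm, area_ha, head_m):
    """Assumed form of a normalized specific consumption indicator: energy per mm
    of water applied, per irrigated hectare, normalized to 100 m of total
    manometric head (inferred from the reported units only, not from the study)."""
    return energy_kwh / (depth_mm * area_ha * (head_m / 100.0))

# Hypothetical pivot: 150,000 kWh per season, 450 mm applied over 100 ha, 80 m head.
print(round(ceni(150_000, 450, 100, 80), 2), "kWh mm⁻¹ ha⁻¹ (100 m)⁻¹")
```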

Relevance: 70.00%

Abstract:

In recent years, UK industry has seen explosive growth in the number of 'Computer Aided Production Management' (CAPM) system installations. Of the many CAPM systems, materials requirement planning/manufacturing resource planning (MRP/MRPII) is the most widely implemented. Despite the huge investments in MRP systems, over 80 percent are said to have failed within 3 to 5 years of installation. Many people now assume that Just-In-Time (JIT) is the best manufacturing technique. However, those who have implemented JIT have found that it also has many problems. The author argues that the success of a manufacturing company will not be due to a system that follows a single technique, but to the integration of many techniques and the ability to make them complement each other in a specific manufacturing environment. This dissertation examines the potential for integrating MRP with JIT and Two-Bin systems to reduce the operational costs involved in managing bought-out inventory. Within this framework it shows that controlling MRP is essential to facilitate the integration process. The behaviour of MRP systems depends on the complex interactions between the numerous control parameters used, and methodologies and models are developed to set these parameters. The models are based on the Pareto principle: the idea is to use business targets to set a coherent set of parameters, which not only enables those business targets to be realised but also facilitates JIT implementation. The approach is illustrated in the context of an actual manufacturing plant, IBM Havant, a high-volume electronics assembly plant in which the majority of materials are bought out. The parameter-setting models are applicable to controlling bought-out items in a wide range of industries and are not dependent on specific MRP software. The models have produced successful results in several companies and are now being developed as commercial products.
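
The kind of Pareto-based analysis such parameter-setting models build on can be illustrated as follows: bought-out items are ranked by annual usage value and split into ABC classes, and each class is then given its own control parameters. The class boundaries and the parameter values in the sketch are illustrative, not the dissertation's actual settings.

```python
def abc_classify(annual_usage_value, a_cut=0.8, b_cut=0.95):
    """Hedged sketch of Pareto (ABC) classification of bought-out items by annual
    usage value. Items covering the first 80% of cumulative value are class A,
    the next 15% class B, and the remainder class C (illustrative cut-offs)."""
    total = sum(annual_usage_value.values())
    running = 0.0
    classes = {}
    for item, value in sorted(annual_usage_value.items(), key=lambda kv: -kv[1]):
        running += value / total
        classes[item] = "A" if running <= a_cut else ("B" if running <= b_cut else "C")
    return classes

# Illustrative MRP parameters per class: A items ordered frequently (JIT-like),
# C items on generous coverage in the spirit of a Two-Bin system.
POLICY = {"A": {"lot_policy": "lot-for-lot", "safety_days": 2},
          "B": {"lot_policy": "period order quantity", "safety_days": 5},
          "C": {"lot_policy": "fixed order quantity", "safety_days": 20}}

usage = {"resistor": 500, "psu": 120_000, "chassis": 60_000, "label": 900, "dram": 250_000}
for item, cls in abc_classify(usage).items():
    print(item, cls, POLICY[cls])
```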

Relevance: 60.00%

Abstract:

Background: The topographical features of intraradicular dentine pretreated with sodium hypochlorite (NaOCl) or ethylenediamine tetraacetic acid (EDTA) followed by diode laser irradiation have not yet been determined. Purpose: To evaluate the alterations of dentine irradiated with a 980-nm diode laser at different parameters after surface treatment with NaOCl or EDTA. Study design: Roots of 60 canines were biomechanically prepared and irrigated with NaOCl or EDTA. Groups were divided according to the laser parameters: 1.5 W/CW; 1.5 W/100 Hz; 3.0 W/CW; 3.0 W/100 Hz and no irradiation (control). The roots were split longitudinally and analyzed by scanning electron microscopy (SEM) in a quali-quantitative way. The scores were submitted to two-way Kruskal-Wallis and Dunn's tests. Results: The statistical analysis demonstrated that the specimens treated only with NaOCl or EDTA (control groups) were statistically different (P < 0.05) from the laser-irradiated specimens, regardless of the parameter setting. The specimens treated with NaOCl showed a laser-modified surface with a smear layer, fissures, and no visible tubules. Those treated with EDTA and irradiated by laser presented an absence of smear layer, partially exposed tubules, and melted areas. Conclusions: The tested parameters of the 980-nm diode laser promoted similar alterations in dentine morphology, dependent on the type of surface pretreatment. Microsc. Res. Tech. 72:22-27, 2009. (C) 2008 Wiley-Liss, Inc.

Relevance: 60.00%

Abstract:

The indexable Symbolic Aggregate approXimation (iSAX) is widely used in time series data mining. Its popularity arises from the fact that it greatly reduces time series size, is symbolic, allows lower bounding, and is space efficient. However, it requires setting two parameters, the symbolic length and the alphabet size, which limits the applicability of the technique. The optimal parameter values are highly application-dependent; typically, they are either set to a fixed value or experimentally probed for the best configuration. In this work we propose an approach to automatically estimate iSAX's parameters. The approach, AutoiSAX, not only discovers the best parameter setting for each time series in the database, but also finds the alphabet size for each iSAX symbol within the same word. It is based on simple and intuitive ideas from time series complexity and statistics. The technique can be smoothly embedded in existing data mining tasks as an efficient sub-routine. We analyze its impact on visualization interpretability, classification accuracy and motif mining. Our contribution aims to make iSAX a more general approach as it evolves towards a parameter-free method.
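
The two parameters in question can be seen in a minimal SAX-style discretization: a z-normalized series is reduced by piecewise aggregate approximation to `word_length` segments, and each segment mean is mapped to one of `alphabet_size` symbols via equiprobable Gaussian breakpoints. Plain SAX is shown here; iSAX additionally allows the per-symbol cardinalities that AutoiSAX estimates.

```python
import numpy as np
from scipy.stats import norm

def sax(ts, word_length=8, alphabet_size=4):
    """Minimal SAX-style discretization illustrating the two parameters the paper
    automates: the symbolic (word) length and the alphabet size. The series
    length must be a multiple of word_length in this simple sketch."""
    ts = (ts - ts.mean()) / ts.std()                       # z-normalize
    paa = ts.reshape(word_length, -1).mean(axis=1)         # piecewise aggregate approximation
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])  # equiprobable bins
    return np.searchsorted(breakpoints, paa)               # one symbol index per segment

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * rng.standard_normal(64)
print(sax(series))   # 8 symbols drawn from {0, 1, 2, 3}
```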

Relevance: 60.00%

Abstract:

The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways; an algorithm most efficient with one representation may be less efficient with others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with possible variables including the centers, widths, and weights of the basis functions, and with control parameters either kept fixed or adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem-dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using all-fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
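
A minimal DE/rand/1/bin sketch makes the control parameters concrete: the population size, the mutation factor F, and the crossover rate CR below are exactly the quantities whose settings the thesis studies and, in the adaptive variant, adjusts with a fuzzy controller (the fuzzy adaptation itself is not shown).

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch with fixed control parameters; an adaptive
    variant would adjust F and CR during the run instead of keeping them fixed."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds, dtype=float).T
    dim = len(low)
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), low, high)   # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                                  # ensure one gene crosses
            trial = np.where(cross, mutant, pop[i])                          # binomial crossover
            trial_fit = f(trial)
            if trial_fit <= fit[i]:                                          # greedy selection
                pop[i], fit[i] = trial, trial_fit
    return pop[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
print(differential_evolution(sphere, [(-5, 5)] * 5))
```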

Relevance: 60.00%

Abstract:

Thesis completed under joint supervision (cotutelle) between the Université de Montréal and the Université de Technologie de Troyes.

Relevance: 60.00%

Abstract:

Advances in hardware and software in the past decade have made it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances, in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches; however, some applications require the data to be analysed in real time as soon as it is captured, for example when the data stream is infinite, fast-changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drift. Most data stream classifiers are based on decision trees; however, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new and removing old rules. It differs from the more popular decision tree based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
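
The adaptive loop described above can be sketched as follows; this is an illustration of the general idea (cover-or-abstain prediction, buffering of unclassified instances, pruning of weak rules, periodic re-induction), not the authors' implementation, and `induce_rules` stands in for any batch rule learner with the hypothetical interface shown.

```python
from collections import deque

class ERulesLikeClassifier:
    """Hedged sketch of an eRules-style adaptive rule classifier: keep a rule set,
    abstain when no rule fires, buffer the unclassified instances, and
    periodically drop weak rules and learn new ones from the buffer."""

    def __init__(self, induce_rules, min_accuracy=0.7, relearn_every=500):
        self.induce_rules = induce_rules     # batch rule learner: list of (x, y) -> list of rules
        self.rules = []                      # each rule: {"covers": fn, "label": y, "hits": n, "correct": n}
        self.buffer = deque(maxlen=relearn_every)
        self.min_accuracy = min_accuracy

    def predict(self, x):
        for rule in self.rules:
            if rule["covers"](x):
                return rule["label"]
        return None                          # abstain instead of guessing

    def learn_one(self, x, y):
        prediction = self.predict(x)
        for rule in self.rules:              # track each rule's accuracy on the stream
            if rule["covers"](x):
                rule["hits"] += 1
                rule["correct"] += int(rule["label"] == y)
        if prediction is None:
            self.buffer.append((x, y))
        # adapt: remove rules whose observed accuracy has dropped (after a grace period)
        self.rules = [r for r in self.rules
                      if r["hits"] < 10 or r["correct"] / r["hits"] >= self.min_accuracy]
        # re-induce new rules once enough unclassified instances have accumulated
        if len(self.buffer) == self.buffer.maxlen:
            self.rules.extend(self.induce_rules(list(self.buffer)))
            self.buffer.clear()
```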

Relevance: 60.00%

Abstract:

This thesis presents an adaptive tuning system that can be described as a dynamic Just Intonation tuning system compatible with equal-tempered instruments. The tuning system is called Hermode Tuning (HMT), and the tuning used as comparison for evaluation is the standardized western tuning, equal temperament. This study investigates preferences for these two musical tuning systems, depending on whether the tunings are presented on a piano or with woodwind instruments. A listening test was carried out with students at the Falun Conservatory of Music, including both vertical listening (intervals) and horizontal listening (cadences and musical compositions) of Hermode-tuned musical material. Overall, the results showed no significant preference for either tuning system, irrespective of which instrument it was presented with. The clearest results were a misjudged justly intoned pure third on the piano and a preference for an adaptively tuned piano presented in a simple harmonic structure, with a parameter setting of HMT 70%. Materials for comparison were partly taken from Hermode's own website, but overall, attitudes towards these sequences (rated on a Likert scale of one to five) showed a low mean value. This shows the complexity of the topic, and no general conclusions regarding the choice of intonation or tuning system could be drawn for the presented material.
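
For context, the discrepancy such a listening test probes is small but systematic; the snippet below computes interval sizes in cents (1200·log2 of the frequency ratio) for two just intervals and compares them with their equal-tempered counterparts.

```python
from math import log2

cents = lambda ratio: 1200 * log2(ratio)   # size of an interval in cents

print(round(cents(5 / 4), 1))   # just major third  ≈ 386.3 cents
print(round(cents(3 / 2), 1))   # just perfect fifth ≈ 702.0 cents
# Equal temperament fixes these at 400 and 700 cents, so a just major third is
# about 13.7 cents narrower - the kind of discrepancy an adaptive system like HMT retunes.
```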

Relevance: 60.00%

Abstract:

In this paper, we propose a random intercept Poisson model in which the random effect is assumed to follow a generalized log-gamma (GLG) distribution. This random effect accommodates (or captures) the overdispersion in the counts and induces within-cluster correlation. We derive the first two moments for the marginal distribution as well as the intraclass correlation. Even though numerical integration methods are, in general, required for deriving the marginal models, we obtain the multivariate negative binomial model from a particular parameter setting of the hierarchical model. An iterative process is derived for obtaining the maximum likelihood estimates for the parameters in the multivariate negative binomial model. Residual analysis is proposed and two applications with real data are given for illustration. (C) 2011 Elsevier B.V. All rights reserved.
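
The route from a hierarchical count model to a negative binomial marginal can be illustrated with the familiar Poisson-gamma mixture; this is a standard derivation shown for orientation, not necessarily the paper's exact GLG parameterization.

```latex
% Sketch (standard Poisson-gamma mixture): with a multiplicative gamma frailty of mean one,
\begin{align*}
Y_{ij}\mid u_i &\sim \mathrm{Poisson}(\mu_{ij} u_i), &
u_i &\sim \mathrm{Gamma}(\phi,\phi), \quad \mathbb{E}[u_i]=1,\\
\Pr(Y_{ij}=y) &= \int_0^\infty \frac{e^{-\mu_{ij}u}(\mu_{ij}u)^y}{y!}\,
 \frac{\phi^{\phi}u^{\phi-1}e^{-\phi u}}{\Gamma(\phi)}\,du
 = \frac{\Gamma(y+\phi)}{y!\,\Gamma(\phi)}
   \left(\frac{\phi}{\phi+\mu_{ij}}\right)^{\!\phi}
   \left(\frac{\mu_{ij}}{\phi+\mu_{ij}}\right)^{\!y},
\end{align*}
% i.e. a negative binomial marginal with E[Y_{ij}] = \mu_{ij} and
% Var(Y_{ij}) = \mu_{ij} + \mu_{ij}^2/\phi; sharing the same u_i across the counts
% of cluster i is what induces the within-cluster correlation mentioned above.
```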

Relevance: 60.00%

Abstract:

Semi-supervised learning is a classification paradigm in which just a few labeled instances are available for the training process. To overcome this small amount of initial label information, the information provided by the unlabeled instances is also considered. In this paper, we propose a nature-inspired semi-supervised learning technique based on attraction forces. Instances are represented as points in a k-dimensional space, and the movement of data points is modeled as a dynamical system. As the system runs, data items with the same label cooperate with each other, and data items with different labels compete with one another to attract unlabeled points by applying a specific force function. In this way, all unlabeled data items can be classified when the system reaches its stable state. Stability analysis for the proposed dynamical system is performed and some heuristics are proposed for parameter setting. Simulation results show that the proposed technique achieves good classification results on artificial data sets and is comparable to well-known semi-supervised techniques on benchmark data sets.
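
The dynamics described above can be caricatured in a few lines; the force law, step size, and final labeling rule below are illustrative choices, not the paper's actual model.

```python
import numpy as np

def force_based_ssl(X_lab, y_lab, X_unl, steps=200, dt=0.05, eps=1e-3):
    """Hedged sketch of force-based semi-supervised labeling: labeled points stay
    fixed and pull the unlabeled points towards them with a gravity-like force;
    once the motion settles, each unlabeled point takes the class exerting the
    strongest total pull on it."""
    pos = X_unl.astype(float).copy()
    classes = np.unique(y_lab)
    for _ in range(steps):
        total_force = np.zeros_like(pos)
        for xl in X_lab:
            diff = xl - pos                                     # direction towards the labeled point
            dist2 = (diff ** 2).sum(axis=1, keepdims=True) + eps
            total_force += diff / dist2                         # inverse-square-like attraction
        pos += dt * total_force                                 # move the unlabeled points
    # classify by the class whose labeled points pull hardest at the final positions
    pulls = np.array([
        sum(1.0 / (((xl - pos) ** 2).sum(axis=1) + eps) for xl in X_lab[y_lab == c])
        for c in classes])
    return classes[pulls.argmax(axis=0)]

# Two tiny Gaussian blobs with one labeled seed each (illustrative data).
rng = np.random.default_rng(1)
X_unl = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
print(force_based_ssl(np.array([[0.0, 0.0], [3.0, 3.0]]), np.array([0, 1]), X_unl))
```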

Relevance: 60.00%

Abstract:

Agile development methodologies have risen in popularity within industry in recent years due to the speed and reliability of the processes they propose. The DevOps philosophy, and specifically the methodologies derived from it such as Continuous Delivery and Continuous Deployment, push for totally automated management of the application lifecycle, from the source code to the software running in the production environment. Automation in this regard is used as a means to produce repeatable, reliable and fast processes. However, not all parts of the Continuous methodologies are completely automated. In particular, management of runtime parameter configuration is a problem whose impact on the deployment process has grown with the scalability and elasticity provided by cloud technologies. Most deployment tools nowadays can automate the deployment of runtime parameter configuration, but they offer no support for setting those parameters or validating the files they deploy; the wide range of configuration options, and the fact that the values of many parameters reflect user preferences, may suggest that any solution to the problem has to be tailored to a specific application rather than being general. With the aim of solving this problem, I propose a configuration model that can be inferred from existing configuration instances and that reflects user preferences, so that it can be used to ease the configuration process. The configuration model can serve as the basis of an interactive configuration process capable of guiding a human operator through the configuration of an application for its deployment in a specific environment, or of automatically detecting configuration changes and producing a valid runtime parameter configuration that accommodates those changes. Additionally, the configuration model should be managed like any other software artefact and incorporated into current management practices. I therefore also propose a service management model that includes runtime parameter configuration information and that is able to describe and manage current architectural approaches such as microservice architectures.
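
The inference idea can be sketched on flat key-value configurations: collect, per parameter, the observed types and values and whether every instance sets it, and use that model to validate (or prompt for) a new configuration. The data structures and validation rules below are illustrative, not the thesis's actual model.

```python
from collections import defaultdict

def infer_config_model(instances):
    """Hedged sketch of inferring a configuration model from existing configuration
    instances (flat key-value dicts here, for simplicity): record, per parameter,
    its observed types, the values seen, and whether every instance sets it."""
    model = defaultdict(lambda: {"types": set(), "values": set(), "count": 0})
    for inst in instances:
        for key, value in inst.items():
            model[key]["types"].add(type(value).__name__)
            model[key]["values"].add(value)
            model[key]["count"] += 1
    n = len(instances)
    return {k: {"type": "/".join(sorted(v["types"])),
                "required": v["count"] == n,
                "observed": sorted(v["values"], key=str)} for k, v in model.items()}

def validate(config, model):
    """Flag parameters that are missing (if required) or carry never-seen values."""
    issues = []
    for key, spec in model.items():
        if spec["required"] and key not in config:
            issues.append(f"missing required parameter: {key}")
        elif key in config and config[key] not in spec["observed"]:
            issues.append(f"unseen value for {key}: {config[key]!r}")
    return issues

existing = [{"port": 8080, "workers": 4, "debug": False},
            {"port": 9090, "workers": 8, "debug": False},
            {"port": 8080, "workers": 4}]
model = infer_config_model(existing)
print(validate({"port": 8081, "workers": 4}, model))
```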

Relevance: 60.00%

Abstract:

Electrical compound action potentials (ECAPs) of the cochlear nerve are used clinically for quick and efficient cochlear implant parameter setting. The ECAP is the aggregate response of nerve fibres at various distances from the recording electrode, and the magnitude of the ECAP is therefore related to the number of fibres excited by a particular stimulus. Current methods, such as the masker-probe or alternating polarity methods, use the ECAP magnitude at various stimulus levels to estimate the neural threshold, from which the parameters are calculated. However, the correlation between ECAP threshold and perceptual threshold is not always good, with the ECAP threshold typically being much higher than the perceptual threshold. The lower correlation is partly due to the very different pulse rates used for ECAPs (below 100 Hz) and clinical programs (hundreds of Hz up to several kHz). Here we introduce a new method of estimating ECAP threshold for cochlear implants based upon the variability of the response. At neural threshold, where some but not all fibres respond, the response differs from trial to trial. This inter-trial variability can be detected on top of the constant variability of the system noise. The large stimulus artefact, which requires additional trials for artefact rejection in the standard ECAP magnitude methods, is not consequential, as it has little variability. The variability method therefore consists of simply presenting a pulse and recording the ECAP, and as such is quicker than other methods. It also has the potential to be run at high rates, like clinical programs, potentially improving the correlation with behavioural threshold. Preliminary data are presented showing a detectable variability increase shortly after probe offset, at probe levels much lower than those producing a detectable ECAP magnitude. Care must be taken, however, to avoid saturation of the recording amplifier; in our experiments we found a gain of 300 to be optimal.
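
The variability method can be sketched as follows; the windowing and the detection criterion are illustrative choices, not the authors' exact analysis.

```python
import numpy as np

def variability_response_detected(trials, baseline_window, response_window, criterion=2.0):
    """Hedged sketch of variability-based response detection: estimate the
    across-trial variance sample-by-sample in a post-stimulus window and compare
    it with the variance in a pre-stimulus baseline window; a ratio above
    `criterion` suggests neural activity. `trials` is a (n_trials, n_samples)
    array; windows are (start, stop) sample indices."""
    var_per_sample = trials.var(axis=0, ddof=1)            # inter-trial variance at each sample
    response_var = var_per_sample[slice(*response_window)].mean()
    baseline_var = var_per_sample[slice(*baseline_window)].mean()
    return response_var / baseline_var > criterion

# Synthetic check: stationary noise plus a trial-to-trial varying "response" in samples 40-80.
rng = np.random.default_rng(0)
noise = rng.normal(0, 1.0, (100, 200))
response = np.zeros((100, 200))
response[:, 40:80] = rng.normal(0, 2.0, (100, 40))
print(variability_response_detected(noise + response, (0, 40), (40, 80)))   # True
print(variability_response_detected(noise, (0, 40), (40, 80)))              # likely False
```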

Relevance: 60.00%

Abstract:

With its low-power operation and flexible networking capabilities, IEEE 802.15.4 has been widely regarded as a strong candidate communication technology for wireless sensor networks (WSNs). It is expected that, with an increasing number of deployments of 802.15.4-based WSNs, multiple WSNs will coexist with full or partial overlap in residential or enterprise areas. As WSNs are usually deployed without coordination, communication under the 802.15.4 channel access scheme can suffer significant degradation, which has a large impact on system performance. In this thesis we investigate the effectiveness of 802.15.4 networks in supporting WSN applications in various environments, especially when hidden terminals are present due to the uncoordinated coexistence problem. Both analytical models and system-level simulators are developed to analyse the performance of the random access scheme specified by the IEEE 802.15.4 medium access control (MAC) standard for several network scenarios. The first part of the thesis investigates the effectiveness of a single 802.15.4 network supporting WSN applications. A Markov-chain-based analytic model is applied to model the MAC behaviour of the IEEE 802.15.4 standard, and a discrete event simulator is also developed to analyse the performance and verify the proposed analytical model. It is observed that 802.15.4 networks can adequately support most WSN applications with their various functionalities. After the investigation of a single network, the next part of the thesis investigates the uncoordinated coexistence problem of multiple 802.15.4 networks deployed with fully or partially overlapping communication ranges. Both non-sleep and sleep modes are investigated under different channel conditions by analytic and simulation methods to obtain a comprehensive performance evaluation. It is found that the uncoordinated coexistence problem can significantly degrade the performance of 802.15.4 networks, to the point that the QoS requirements of many WSN applications are unlikely to be satisfied. The proposed analytic model is validated by simulations and could be used to obtain optimal parameter settings before WSN deployment to mitigate the interference risks.
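
For reference, the channel access scheme in question is the slotted CSMA/CA procedure of the 802.15.4 MAC; the sketch below shows its backoff logic and the attributes (macMinBE, macMaxBE, macMaxCSMABackoffs) that such models help tune, with timing details omitted.

```python
import random

def slotted_csma_ca(channel_idle, macMinBE=3, macMaxBE=5, macMaxCSMABackoffs=4):
    """Sketch of the slotted CSMA/CA procedure of the IEEE 802.15.4 MAC.
    `channel_idle()` abstracts a clear channel assessment (CCA); backoff-period
    timing and superframe alignment are omitted. The default attribute values
    follow the standard's defaults (to the best of our reading)."""
    NB, BE = 0, macMinBE                         # backoff attempts so far, backoff exponent
    while True:
        CW = 2                                   # two consecutive idle CCAs required in slotted mode
        slots = random.randint(0, 2 ** BE - 1)   # random backoff delay (in backoff periods)
        # ... the node waits `slots` backoff periods here ...
        while CW > 0 and channel_idle():
            CW -= 1                              # idle CCA: one step closer to transmitting
        if CW == 0:
            return "transmit"                    # channel found idle twice in a row
        NB, BE = NB + 1, min(BE + 1, macMaxBE)   # busy channel: back off again with a wider window
        if NB > macMaxCSMABackoffs:
            return "channel access failure"

# Toy channel that is busy 60% of the time (hidden-terminal traffic would raise this further).
print(slotted_csma_ca(lambda: random.random() > 0.6))
```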