942 results for categorization IT PFC computational neuroscience model HMAX
Abstract:
This work presents a new model for the Heterogeneous p-Median Problem (HPM), proposed to recover the hidden category structures present in data produced by a sorting task procedure, a popular approach to understanding how heterogeneous individuals perceive products and brands. The new model, named the Penalty-Free Heterogeneous p-Median Problem (PFHPM), is a single-objective version of the original HPM that also eliminates its main parameter, the penalty factor, which weights the terms of the objective function. Tuning this parameter controls how the model recovers the hidden category structures in the data and demands broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed-integer linear programs, from which lower bounds for the PFHPM were obtained. These bounds were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions, solving both artificially generated instances from a Monte Carlo simulation and real-data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm (PFHPM) recover the original category structures underlying heterogeneous individual perceptions more accurately than the original model and algorithm (HPM). Finally, an illustrative application of the PFHPM is presented, along with insights into new possibilities, such as extending the model to fuzzy environments.
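For context, the classical p-median core on which the HPM and PFHPM build can be written as the following mixed-integer program (a standard textbook formulation, not necessarily the thesis's exact notation), with d_{ij} the dissimilarity between objects i and j:

\[
\min \sum_{i \in N}\sum_{j \in N} d_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{j \in N} x_{ij} = 1 \;\; \forall i \in N, \qquad
x_{ij} \le y_j \;\; \forall i, j \in N, \qquad
\sum_{j \in N} y_j = p, \qquad
x_{ij},\, y_j \in \{0,1\},
\]

where y_j = 1 if object j is selected as a median (a category prototype) and x_{ij} = 1 if object i is assigned to it. Per the abstract, the HPM augments this core with additional objective terms weighted by the penalty factor, which is precisely the parameter the PFHPM removes.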
Abstract:
The problem of selecting suppliers/partners is a crucial part of the decision-making process for companies that intend to compete effectively in their area of activity. Supplier/partner selection is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, five broad supplier-selection criteria were identified through a literature review: Quality, Financial, Synergies, Cost, and Production System. Five sub-criteria were also included within these criteria. A survey was then elaborated, and companies were contacted and asked which factors carry the most weight in their supplier decisions. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that supports decision making in the supplier/partner selection process.
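As a rough sketch of how the hierarchical linear-weighting model could be operationalized with AHP, the snippet below derives priority weights for the five criteria from a pairwise comparison matrix; the matrix values are hypothetical illustrations, not the survey's actual judgments:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for the five criteria
# (Quality, Financial, Synergies, Cost, Production System).
# A[i, j] = how much more important criterion i is than criterion j.
A = np.array([
    [1,   3,   5,   2,   4],
    [1/3, 1,   3,   1/2, 2],
    [1/5, 1/3, 1,   1/4, 1/2],
    [1/2, 2,   4,   1,   3],
    [1/4, 1/2, 2,   1/3, 1],
])

# The principal eigenvector of A gives the AHP priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.real(eigvals))
w = np.real(eigvecs[:, k])
w = w / w.sum()

# Consistency ratio check (Saaty's random index RI = 1.12 for n = 5).
n = A.shape[0]
ci = (np.real(eigvals[k]) - n) / (n - 1)
print("weights:", np.round(w, 3), "CR:", round(ci / 1.12, 3))
```

Sub-criteria weights would be derived the same way from their own comparison matrices and multiplied down the hierarchy to score each candidate supplier.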
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
Common computational principles underlie the processing of various visual features in the cortex. They are thought to create similar patterns of contextual modulation in behavioral studies for different features, such as orientation and direction of motion. Here, I studied the possibility that a single theoretical framework of circular feature coding and processing, implemented in different visual areas, could explain these similarities in observations. Stimuli were created that allowed direct comparison of the contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak and strong signal perception. A single simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities across the two domains, while differences in feature population characteristics explained the differences in modulation for both experimental probes. These results demonstrate how a single computational principle can underlie the processing of various features across the cortices.
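The "standard vector average" decoder referred to here is conventionally written, for a population of units with preferred circular values θ_k and responses r_k, as (standard notation, assumed rather than quoted from the paper):

\[
\hat{\theta} = \arg\!\left( \sum_k r_k\, e^{i\theta_k} \right),
\]

so inhibitory contextual interactions that reshape the population response r_k shift the decoded value in the same way for any circular feature, whether motion direction or orientation (for orientation, the angles are usually doubled to account for its 180° periodicity).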
Abstract:
Measuring the extent to which a piece of structural timber has distorted at a macroscopic scale is fundamental to assessing its viability as a structural component. From the sawmill to the construction site, as structural timber dries, distortion can render it unsuitable for its intended purpose. This rejection of unusable timber is a considerable source of waste for the timber industry and the wider construction sector. As such, ensuring accurate measurement of distortion is a key step in addressing inefficiencies within timber processing. Currently, the FRITS frame method is the established approach used to characterize a timber surface profile. The method, while reliable, depends on relatively few measurements taken across a limited area of the overall surface, with a great deal of interpolation required. Further, the process is unavoidably slow and cumbersome: the immobile scanning equipment limits where and when measurements can be taken and constrains the process as a whole. This thesis seeks to introduce LiDAR scanning as a new, alternative approach to measuring distortion features. Although the technique is in its infancy as a measurement method within timber research, the practicalities of using LiDAR scanning are demonstrated here, exploiting many of the advantages the technology has over current approaches. LiDAR scanning creates a much more comprehensive image of a timber surface, generating input data orders of magnitude larger than that of the FRITS frame. Set-up and scanning time for LiDAR is also much shorter, and the method is more flexible than existing ones: freed from many of the constraints of the FRITS frame, the measurement process can be carried out in almost any environment. For this thesis, surface scans were carried out on seven Sitka spruce samples of dimensions 48.5 × 102 × 3000 mm using both the FRITS frame and a LiDAR scanner. The samples presented marked levels of distortion and were relatively free from knots. A computational measurement model was created to extract feature measurements from the raw LiDAR data, enabling an assessment of each piece of timber in accordance with existing standards. Assessment of distortion features focused primarily on the measurement of twist, due to its strong prevalence in spruce and the considerable concern it generates within the construction industry. Additional measurements of surface inclination and bow were also made with each method to further establish LiDAR's credentials as a viable alternative. Overall, feature measurements generated by the new LiDAR method compared well with those of the established FRITS method. From these investigations, recommendations were made to address inadequacies within existing measurement standards, namely their reliance on generalised and interpretative descriptions of distortion. Potential further uses of LiDAR scanning within timber research are also discussed.
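To illustrate the kind of computational measurement model described, the sketch below estimates twist from a LiDAR point cloud by comparing the cross-sectional tilt at the two ends of the board; the function name, slice width, and coordinate conventions are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

def twist_from_point_cloud(points, slice_width=5.0):
    """Estimate twist from a LiDAR point cloud of a board's top face.

    points: (N, 3) array with x = position along the length, y = position
    across the width, z = surface height, all in mm (assumed layout).
    Returns the twist angle in degrees between the two end cross-sections;
    standards typically re-express this over a reference length and width.
    """
    x = points[:, 0]
    angles = []
    for x0 in (x.min(), x.max() - slice_width):
        s = points[(x >= x0) & (x <= x0 + slice_width)]
        # Fit z = a*y + b across the width of this thin slice.
        a, b = np.polyfit(s[:, 1], s[:, 2], 1)
        angles.append(np.degrees(np.arctan(a)))
    return angles[1] - angles[0]
```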
Abstract:
One way to achieve amplification of distal synaptic inputs on a dendritic tree is to scale the amplitude and/or duration of the synaptic conductance with its distance from the soma. This is an example of what is often referred to as “dendritic democracy”. Although well studied experimentally, to date this phenomenon has not been thoroughly explored from a mathematical perspective. In this paper we adopt a passive model of a dendritic tree with distributed excitatory synaptic conductances and analyze a number of key measures of democracy. In particular, via moment methods we derive laws for the transport, from synapse to soma, of strength, characteristic time, and dispersion. These laws lead immediately to synaptic scalings that overcome attenuation with distance. We follow this with a Neumann approximation of Green’s representation that readily produces the synaptic scaling that democratizes the peak somatic voltage response. Results are obtained for both idealized geometries and for the more realistic geometry of a rat CA1 pyramidal cell. For each measure of democratization we produce and contrast the synaptic scaling associated with treating the synapse as either a conductance change or a current injection. We find that our respective scalings agree up to a critical distance from the soma and we reveal how this critical distance decreases with decreasing branch radius.
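For intuition, in the simplest case of an infinite passive cable, the steady-state voltage from a current injection attenuates as

\[
V(x) = V(0)\, e^{-x/\lambda}, \qquad \lambda = \sqrt{r_m / r_a},
\]

with r_m the membrane resistance and r_a the axial resistance per unit length, so a democratizing scaling of synaptic strength grows roughly as e^{x/λ} with distance x from the soma. This textbook result is only a caricature of the branched, transient analysis in the paper, where the scalings for conductance changes and current injections agree only up to a critical distance.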
Abstract:
Dissertation (Master's)--Universidade de Brasília, Instituto de Química, Programa de Pós-Graduação em Química, 2016.
Abstract:
Most organizations store their historical business information in data warehouses, which are queried with online analytical processing (OLAP) tools to support strategic decisions. This information has to be properly protected against unauthorized access; nevertheless, a great number of legacy OLAP applications were developed without considering security aspects, or had security incorporated only after the system was implemented. This work defines a reverse engineering process that allows us to obtain the conceptual model corresponding to a legacy OLAP application, and also analyses and represents the security aspects that may have been established. The process has been aligned with a model-driven architecture for developing secure OLAP applications by defining the transformations needed to apply it automatically. Once the conceptual model has been extracted, it can easily be modified, improved with security, and automatically transformed to generate the new implementation.
Abstract:
Study Objectives. The use of mouse models in sleep apnea research is limited by the belief that central sleep apneas (CSA), but not obstructive sleep apneas (OSA), occur in rodents. With this study we aimed to develop a protocol to look for the presence of OSAs in wild-type mice and then to apply it to a mouse model of Down syndrome (DS), a human pathology characterized by a high incidence of OSAs. Methods. Nine C57Bl/6J wild-type mice were implanted with electrodes for electroencephalography (EEG), neck electromyography (nEMG), and diaphragmatic activity (DIA), and then placed in a whole-body plethysmographic (WBP) chamber for 8 h during the resting (light) phase to simultaneously record sleep and breathing activity. The concomitant analysis of the WBP and DIA signals allowed discrimination between CSA and OSA. The same protocol was then applied to 12 Ts65Dn mice (a validated model of DS) and 14 euploid controls. Results. OSAs represented about half of the apneic events recorded during rapid-eye-movement sleep (REMS) in each experimental group, while almost only CSAs were found during non-REMS. Ts65Dn mice had a rate of apneic events similar to that of euploid controls, but a significantly higher occurrence of OSAs during REMS. Conclusions. We demonstrated for the first time that mice physiologically exhibit both CSAs and OSAs, and that the latter are more prevalent in the Ts65Dn mouse model of DS. These findings indicate that mice can be used as a valid tool to accelerate understanding of the pathophysiology of all kinds of sleep apnea and to develop new therapeutic approaches to counter these respiratory disorders.
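The discrimination logic described, scoring an apnea as obstructive when diaphragmatic effort persists without airflow and as central when both are absent, can be summarized as below; the signal names and thresholds are hypothetical placeholders, not the study's scoring criteria:

```python
def classify_apnea(wbp_flow, dia_emg, flow_thresh=0.1, effort_thresh=0.2):
    """Classify one candidate apneic event.

    wbp_flow: airflow amplitude from the whole-body plethysmograph,
              normalized to baseline breathing (hypothetical units).
    dia_emg:  diaphragmatic EMG amplitude, normalized to baseline.
    Thresholds are illustrative, not the study's actual criteria.
    """
    if wbp_flow >= flow_thresh:
        return "no apnea"           # airflow still present
    if dia_emg >= effort_thresh:
        return "obstructive (OSA)"  # effort persists without airflow
    return "central (CSA)"          # neither airflow nor effort
```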
Abstract:
The dissertation starts by describing the phenomena behind the increasing importance recently acquired by satellite applications. The spread of this technology comes with implications, such as an increase in maintenance cost, which motivates the development of advanced techniques that give spacecraft greater autonomy in health monitoring. Machine learning techniques are widely employed to lay the foundation for effective fault detection systems that examine telemetry data. Telemetry consists of a considerable amount of information; therefore, the adopted algorithms must be able to handle multivariate data while facing the limitations imposed by on-board hardware. In the framework of outlier detection, the dissertation addresses unsupervised machine learning methods, in which no prior knowledge of the data behavior is assumed. Specifically, two models are examined: Local Outlier Factor (LOF) and One-Class Support Vector Machines (OC-SVM). Their performances are compared in terms of both prediction accuracy and computational cost. Both models are trained and tested on the same sets of time series data in a variety of settings, aimed at gaining insight into the effect of increasing dimensionality. The results obtained support the claim that both models, combined with proper tuning of their characteristic parameters, successfully fill the role of outlier detectors for multivariate time series data. Nevertheless, in this specific context, Local Outlier Factor outperforms One-Class SVM, in that it proves more stable over a wider range of input parameter values. This property is especially valuable in unsupervised learning, since it suggests that the model adapts readily to unforeseen patterns.
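A minimal scikit-learn sketch of the two detectors being compared, run on toy multivariate data rather than the dissertation's telemetry, with illustrative parameter choices:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Toy multivariate "telemetry": a nominal cluster plus a few outliers.
X = np.vstack([rng.normal(0, 1, size=(200, 5)),
               rng.normal(6, 1, size=(5, 5))])

lof = LocalOutlierFactor(n_neighbors=20)   # local-density-based detector
lof_labels = lof.fit_predict(X)            # -1 = outlier, 1 = inlier

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm_labels = ocsvm.fit(X).predict(X)     # -1 = outlier, 1 = inlier

print("LOF outliers:   ", int((lof_labels == -1).sum()))
print("OC-SVM outliers:", int((ocsvm_labels == -1).sum()))
```

The stability claim in the abstract corresponds to sweeping n_neighbors for LOF and (nu, gamma) for OC-SVM and observing how strongly the detections vary.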
Abstract:
The 1D extended Hubbard model with a soft-shoulder potential has proved very difficult to study, owing to its lack of exact solvability and to the competition between terms of the Hamiltonian. Given this, we investigated its phase diagram at filling n = 2/5 and soft-shoulder potential range r = 2 using machine learning techniques. This led to a rich phase diagram: calling U and V the parameters associated with the Hubbard potential and the soft-shoulder potential, respectively, we found that for V < 5 and U > 3 the system is always in a Tomonaga-Luttinger Liquid phase, and becomes a Cluster Luttinger Liquid for V > 5.
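For reference, a 1D extended Hubbard Hamiltonian with a soft-shoulder (finite-range) interaction of range r is conventionally written as (standard form; the paper's conventions may differ):

\[
H = -t \sum_{i,\sigma} \left( c^{\dagger}_{i\sigma} c_{i+1,\sigma} + \mathrm{h.c.} \right)
+ U \sum_i n_{i\uparrow} n_{i\downarrow}
+ V \sum_i \sum_{d=1}^{r} n_i\, n_{i+d},
\]

with r = 2 and filling n = 2/5 in the regime studied here; U and V are the parameters scanned to build the phase diagram.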
Abstract:
This thesis, titled "Cybersecurity Capability Maturity Model (C2M2 v 2.0)", aims to study, analyze, apply, and show the strengths and critical points of a model for assessing an organization's cybersecurity posture, in order to improve its weak points, identify the priorities to invest in, and structure a security program integrated into all business processes.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Time availability is a key concept in relation to volunteering, leading organisations and governments to target those outside paid work as a potential source of volunteers. It may be that factors such as growth in female participation in the labour market and an increase in work hours will lead to more people saying they are simply too busy to volunteer. This paper discusses how social and economic changes, such as changing work patterns, are affecting time availability. Using the 1997 ABS Time Use data, it identifies a predictive model of spare time based on demographic, life-stage and employment-related variables. Results confirm that those outside paid work, particularly the young, males, and those without partners or children, are the groups most likely to have time to spare. These groups do not currently report high rates of volunteering. The paper concludes by questioning the premise that people will volunteer simply because they have time to spare: this is just one component of a range of motivations and factors that influence the decision to volunteer.
Abstract:
This article attempts to elucidate one of the mechanisms that link trade barriers, in the form of port costs, to subsequent growth and regional inequality. Prior attention has focused on inland or link costs, but port costs can be considered a further barrier to trade liberalization and growth. In contrast to a highway link, where congestion may be resolved within several hours, congestion at a port may have severe impacts that spread over space and time. Since a port is part of the transportation network, any congestion or disruption is likely to ripple throughout the hinterland. In this sense, it is important to properly model the role that nodal components play in spatial models and international trade. In this article, a spatial computable general equilibrium (CGE) model integrated with a transport network system is presented to simulate the impacts of increases in port efficiency in Brazil. The roles of ports of entry and ports of exit are explicitly considered to capture the holistic picture in an integrated interregional system. Measures of efficiency for different port locations are incorporated in the calibration of the model and used as the benchmark in our simulations. Three scenarios are evaluated: (1) an overall increase in port efficiency in Brazil to achieve international standards; (2) efficiency gains associated with decentralization of port management in Brazil; and (3) regionally differentiated increases in port efficiency to reach the boundary of the national efficiency frontier.