887 results for network cost models
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time-dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. In the following, we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
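For intuition, the central computational object in such dynamic discrete choice (recursive logit) route choice models is a value function defined by a logsum recursion over the network; a sketch in standard recursive-logit notation (illustrative, not necessarily the thesis's exact formulation):

```latex
% Expected maximum utility V(k) at network state k, with deterministic
% link utilities v(a|k) for actions a in A(k) and logit scale \mu:
V(k) = \mu \ln \sum_{a \in A(k)} \exp\!\left( \frac{v(a \mid k) + V(a)}{\mu} \right)

% The implied link choice probabilities are multinomial logit:
P(a \mid k) = \frac{\exp\big( (v(a \mid k) + V(a)) / \mu \big)}
                   {\sum_{a' \in A(k)} \exp\big( (v(a' \mid k) + V(a')) / \mu \big)}
```

Estimating such a model requires solving this fixed point for every candidate parameter vector tried by the optimizer, which is why the decomposition methods and efficient nonlinear optimization algorithms mentioned above matter at scale.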
Abstract:
In this dissertation, we apply mathematical programming techniques (i.e., integer programming and polyhedral combinatorics) to develop exact approaches for influence maximization on social networks. We study four combinatorial optimization problems that deal with maximizing influence at minimum cost over a social network. To our knowledge, all previous work to date on influence maximization problems has focused on heuristics and approximation. We start with the following viral marketing problem, which has attracted a significant amount of interest in the computer science literature. Given a social network, find a target set of customers to seed with a product. These initial adopters then trigger a cascade in which other people start to adopt the product due to the influence they receive from earlier adopters. The goal is to find the minimum-cost target set that results in the entire network adopting the product. We first study a problem called the Weighted Target Set Selection (WTSS) problem. In the WTSS problem, the diffusion can take place over as many time periods as needed, and a free product is given to the individuals in the target set. Restricting the diffusion to a single time period, we obtain a problem called the Positive Influence Dominating Set (PIDS) problem. Next, incorporating partial incentives, we consider a problem called the Least Cost Influence Problem (LCIP). The fourth problem studied is the One Time Period Least Cost Influence Problem (1TPLCIP), which is identical to the LCIP except that we restrict the diffusion to a single time period. We apply a common research paradigm to each of these four problems. First, we work on special graphs: trees and cycles. Based on the insights obtained from special graphs, we develop efficient methods for general graphs. On trees, we first propose a polynomial-time algorithm. More importantly, we present a tight and compact extended formulation. We also project the extended formulation onto the space of the natural variables, which gives the polytope on trees. Next, building upon the result for trees, we derive the polytope on cycles for the WTSS problem, as well as a polynomial-time algorithm on cycles. This leads to our contribution on general graphs. For the WTSS problem and the LCIP, using the observation that the influence propagation network must be a directed acyclic graph (DAG), the strong formulation for trees can be embedded into a formulation on general graphs. We use this to design and implement a branch-and-cut approach for the WTSS problem and the LCIP. In our computational study, we are able to obtain high-quality solutions for random graph instances with up to 10,000 nodes and 20,000 edges (40,000 arcs) within a reasonable amount of time.
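As a concrete illustration of the integer-programming view, a minimal sketch of a one-period, threshold-based seeding model in the spirit of the PIDS variant described above (the graph, thresholds, and costs are hypothetical, and this is not the dissertation's extended formulation):

```python
# pip install pulp networkx
import networkx as nx
import pulp

# Hypothetical social network with activation thresholds and seeding costs.
G = nx.cycle_graph(6)
threshold = {v: 1 for v in G}   # seeded neighbors needed to activate an unseeded node
cost = {v: 1.0 for v in G}      # cost of giving node v the product for free

prob = pulp.LpProblem("one_period_influence", pulp.LpMinimize)
x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in G}  # 1 if v is seeded

# Minimize total seeding cost.
prob += pulp.lpSum(cost[v] * x[v] for v in G)

# One-period diffusion: every node is either seeded or has at least
# threshold[v] seeded neighbors (a PIDS-style covering constraint).
for v in G:
    prob += pulp.lpSum(x[u] for u in G.neighbors(v)) >= threshold[v] * (1 - x[v])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
seeds = [v for v in G if x[v].value() > 0.5]
print("seed set:", seeds, "cost:", pulp.value(prob.objective))
```

On the cycle this reduces to a dominating set; the multi-period WTSS and LCIP formulations studied in the dissertation additionally track when each node becomes active.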
Abstract:
Resolution of multisensory deficits has been observed in teenagers with Autism Spectrum Disorders (ASD) for complex, social speech stimuli; this resolution extends to more basic multisensory processing involving low-level stimuli. In particular, a delayed transition of multisensory integration (MSI) from a default state of competition to one of facilitation has been observed in ASD children. In other words, the complete maturation of MSI is achieved later in ASD. In the present study, a neuro-computational model is used to reproduce patterns of behavior observed experimentally, modeling a bisensory reaction time task in which auditory and visual stimuli are presented in random sequence, either alone (A or V) or together (AV). The model explains how the default competitive state can be implemented via mutual inhibition between primary sensory areas, and how the shift toward the classical multisensory facilitation observed in adults results from inhibitory cross-modal connections becoming excitatory during development. Model results are consistent with stronger cross-modal inhibition in ASD children compared with normotypical (NT) ones, suggesting that the transition toward a cooperative interaction between sensory modalities takes longer to occur. Interestingly, the model also predicts the difference between unisensory switch trials (in which the sensory modality switches) and unisensory repeat trials (in which the sensory modality repeats). This is due to an inhibitory mechanism with slow dynamics, driven by the preceding stimulus, that inhibits the processing of the incoming stimulus when it is of the opposite sensory modality. These findings link the cognitive framework delineated by the empirical results to a plausible neural implementation.
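A minimal sketch of the competitive mechanism described above: two leaky units (auditory, visual) with a cross-modal coupling whose sign switches the network between competition and facilitation, with reaction time taken as the first threshold crossing. All parameters are hypothetical, not the fitted values of the published model:

```python
import numpy as np

def reaction_time(input_a, input_v, w_cross, threshold=0.7,
                  tau=0.05, dt=0.001, t_max=1.0):
    """First time either unit's activity crosses threshold.

    w_cross < 0 models the default cross-modal inhibition (competition);
    w_cross > 0 models the mature excitatory coupling (facilitation).
    """
    a = v = 0.0
    for step in range(int(t_max / dt)):
        da = (-a + input_a + w_cross * v) / tau
        dv = (-v + input_v + w_cross * a) / tau
        a = max(0.0, a + dt * da)   # rectified activities
        v = max(0.0, v + dt * dv)
        if a >= threshold or v >= threshold:
            return (step + 1) * dt
    return t_max  # no response within the trial

# Bisensory (AV) trials: inhibition slows responses, excitation speeds them.
print("AV, inhibitory coupling :", reaction_time(1.0, 1.0, -0.3))
print("AV, excitatory coupling :", reaction_time(1.0, 1.0, +0.3))
print("A alone                 :", reaction_time(1.0, 0.0,  0.0))
```

With these toy values the excitatory-coupling AV trial is fastest and the inhibitory-coupling AV trial is slowest, mirroring the competition-to-facilitation shift the model attributes to development.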
Abstract:
Today more than ever, with the recent war in Ukraine and the increasing number of attacks that affect the systems of nations and companies every day, the world realizes that cybersecurity can no longer be considered just a "cost". It must become a pillar of the infrastructures on which the security of our nations and the safety of people depend. Critical infrastructure sectors, such as energy, financial services, and healthcare, have become targets of many cyberattacks by criminal groups with ever-increasing resources and competencies, putting at risk the security and safety of companies and entire nations. This thesis investigates the state of the art in best practices for securing industrial control systems (ICS). We study the differences between two security frameworks. The first is the Industrial Demilitarized Zone (I-DMZ), a perimeter-based security solution. The second is the Zero Trust Architecture (ZTA), which removes the concept of a perimeter to offer an entirely new approach to cybersecurity based on the slogan 'never trust, always verify'. Starting from this premise, the Zero Trust model embeds strict authentication, authorization, and monitoring controls for any access to any resource. We have defined two architectures, according to the state of the art and cybersecurity experts' guidelines, to compare the I-DMZ and Zero Trust approaches to ICS security. The goal is to demonstrate how a Zero Trust approach dramatically reduces the possibility of an attacker penetrating the network or moving laterally to compromise the entire infrastructure. A third architecture has been defined based on cloud and fog/edge computing technology. It shows how cloud solutions can improve the security and reliability of infrastructure and production processes, which can benefit from a range of new functionalities that the cloud can offer as a service. We have implemented and tested our Zero Trust solution and its ability to block intrusions and attempted attacks.
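To make 'never trust, always verify' concrete, a minimal sketch of a per-request policy decision point in the spirit of the ZTA model (the policy rules, identities, and resource names are hypothetical, not the thesis's implementation):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool   # e.g., patched, endpoint agent running
    mfa_passed: bool
    resource: str
    action: str

# Hypothetical policy: which identities may perform which actions on which resources.
POLICY = {
    ("plc_engineer", "scada_historian", "read"): True,
    ("plc_engineer", "plc_01", "write"): True,
}

def authorize(req: AccessRequest) -> bool:
    """Every request is evaluated; network location grants nothing by itself."""
    if not (req.mfa_passed and req.device_compliant):
        return False  # verify identity and device posture on each access
    return POLICY.get((req.user, req.resource, req.action), False)  # default deny

print(authorize(AccessRequest("plc_engineer", True, True, "plc_01", "write")))  # True
print(authorize(AccessRequest("intruder", True, False, "plc_01", "write")))     # False
```

The contrast with an I-DMZ is that this check runs on every access rather than only at the perimeter, which is what limits lateral movement once an attacker is inside.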
Abstract:
Disconnectivity between Default Mode Network (DMN) nodes can cause clinical symptoms and cognitive deficits in Alzheimer's disease (AD). We aimed to examine the structural connectivity between DMN nodes, to verify the extent to which white matter disconnection affects cognitive performance. MRI data of 76 subjects (25 mild AD patients, 21 amnestic Mild Cognitive Impairment subjects, and 30 controls) were acquired on a 3.0T scanner. ExploreDTI software (fractional anisotropy threshold=0.25, angular threshold=60°) was used to calculate axial, radial, and mean diffusivities, fractional anisotropy, and streamline count. AD patients showed lower fractional anisotropy (P=0.01) and streamline count (P=0.029), and higher radial diffusivity (P=0.014) than controls in the cingulum. After correction for white matter atrophy, only fractional anisotropy and radial diffusivity remained significantly different between AD patients and controls (P=0.003 and P=0.05). In the parahippocampal bundle, AD patients had lower mean and radial diffusivities (P=0.048 and P=0.013) than controls, of which only radial diffusivity survived adjustment for white matter atrophy (P=0.05). Regression models revealed that cognitive performance is also accounted for by white matter microstructural values. Structural connectivity within the DMN is important for the execution of high-complexity tasks, probably due to its relevant role in the integration of the network.
Abstract:
To evaluate the occurrence of severe obstetric complications associated with antepartum and intrapartum hemorrhage among women from the Brazilian Network for Surveillance of Severe Maternal Morbidity. Multicenter cross-sectional study. Twenty-seven obstetric referral units in Brazil between July 2009 and June 2010. A total of 9555 women categorized as having obstetric complications. The occurrence of potentially life-threatening conditions, maternal near miss, and maternal death associated with antepartum and intrapartum hemorrhage was evaluated. Sociodemographic and obstetric characteristics and the use of criteria for management of severe bleeding were also assessed in these women. Prevalence ratios with their respective 95% confidence intervals, adjusted for the cluster effect of the design, were calculated, and multiple logistic regression analysis was performed to identify factors independently associated with the occurrence of severe maternal outcome. Antepartum and intrapartum hemorrhage occurred in only 8% (767) of women experiencing any type of obstetric complication. However, it was responsible for 18.2% (140) of maternal near miss and 10% (14) of maternal death cases. On multivariate analysis, maternal age and previous cesarean section were shown to be independently associated with an increased risk of severe maternal outcome (near miss or death). Severe maternal outcome due to antepartum and intrapartum hemorrhage was highly prevalent among Brazilian women. Certain risk factors, maternal age and previous cesarean delivery in particular, were associated with the occurrence of bleeding.
Abstract:
OBJECTIVE: To analyze home care practices of outpatient and hospital services and their constitution as a substitutive network of health care. METHODOLOGICAL PROCEDURES: Qualitative study that analyzed, based on the tracer-case methodology, four outpatient home care services of the Municipal Health Department and one service of a philanthropic hospital in the city of Belo Horizonte, MG, Brazil, between 2005 and 2007. Interviews were conducted with managers and teams of the home care services, along with document analysis and follow-up of cases with interviews of patients and caregivers. The analysis was guided by the analytical categories of integration of home care into the health care network and the technical-assistance model. ANALYSIS OF RESULTS: The implementation of home care was preceded by a political-institutional decision oriented both toward rationalization, seeking cost reduction, and toward the technical-assistance reordering of the care networks. These two orientations are in dispute and create difficulties in reconciling the interests of the various actors involved in the network and in creating shared management spaces. Technological innovation and the autonomy of families in the implementation of care projects could be identified. The teams proved cohesive, constructing in their daily work new ways of integrating different perspectives to transform health practices. Challenges were observed in the proposal to integrate the different services of a substitutive character, limiting the capacity of home care to change the technical-assistance model. CONCLUSIONS: Home care has the potential to constitute a substitutive network by producing new modes of care that cut across the projects of users, families, the social network, and home care workers. Home care as a substitutive modality of health care requires political, conceptual, and operational sustainability, as well as recognition of the new arrangements and articulation of the ongoing proposals.
Abstract:
Nowadays, digital computer systems and networks are the main engineering tools, being used in the planning, design, operation, and control of buildings, transportation systems, machinery, businesses, and life-sustaining devices of all sizes. Consequently, computer viruses have become one of the most important sources of uncertainty, contributing to a decrease in the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on previous knowledge of the virus code. In spite of their good adaptation capability, these programs work just like vaccines against diseases and are not able to prevent new infections based on the network state. Here, an attempt to model the propagation dynamics of computer viruses relates them to other notable events occurring in the network, making it possible to establish preventive policies in network management. Data on three different viruses were collected on the Internet, and two different identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using data collected from other viruses that formerly infected the network.
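A minimal sketch of the autoregressive identification idea: fit an AR model to one virus's prevalence series and use it to forecast the subsequent dynamics (the series here is synthetic, not the collected Internet data):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Synthetic stand-in for a collected virus-prevalence time series
# (e.g., daily infection reports): logistic growth plus noise.
rng = np.random.default_rng(0)
t = np.arange(120)
series = 1000 / (1 + np.exp(-(t - 60) / 10)) + rng.normal(0, 10, t.size)

# Identify an autoregressive model on the first 100 observations...
model = AutoReg(series[:100], lags=5).fit()

# ...and forecast the next 20 days of propagation.
forecast = model.predict(start=100, end=119)
print(np.round(forecast[:5], 1))
```

A model identified on one epidemic can then be applied to the early observations of a new virus, which is the forecasting step the abstract describes.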
Abstract:
Oscillator networks have been developed in order to perform specific tasks related to image processing. Here we analytically investigate the existence of synchronism in a pair of phase oscillators that are dynamically coupled at short range. We then use these analytical results to design a network capable of detecting the borders of black-and-white figures. Each unit composing this network is a pair of such phase oscillators and is assigned to a pixel in the image. The couplings among the units forming the network are also dynamical. Border detection emerges from the network activity.
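A minimal numerical sketch of the building block: two phase oscillators whose coupling strength itself evolves, locking when the adaptive coupling grows (the coupling law and parameters are a simple illustrative choice, not the paper's exact dynamics):

```python
import numpy as np

# Two phase oscillators with an adaptive (dynamical) coupling k(t).
omega = np.array([1.0, 1.2])   # natural frequencies (hypothetical)
theta = np.array([0.0, 0.5])   # initial phases
k, dt = 0.0, 0.01

for _ in range(5000):
    diff = theta[1] - theta[0]
    # Kuramoto-type phase dynamics with common coupling k.
    dtheta = omega + k * np.sin([diff, -diff])
    # Coupling grows when phases agree, decays otherwise (illustrative law).
    dk = np.cos(diff) - 0.1 * k
    theta = theta + dt * dtheta
    k = k + dt * dk

# Phase-locked pair: small, constant phase difference.
print("phase difference:", float((theta[1] - theta[0]) % (2 * np.pi)))
print("coupling:", round(k, 3))
```

In the network, one such pair per pixel with dynamical couplings to neighboring pairs lets synchronization (or its absence) signal whether neighboring pixels belong to the same region or to a border.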
Abstract:
Local food diversity and traditional crops are essential for cost-effective management of the global epidemic of type 2 diabetes and the associated complications of hypertension. Water and 12% ethanol extracts of native Peruvian fruits such as Lucuma (Pouteria lucuma), Pacae (Inga feuille), Papayita arequipena (Carica pubescens), Capuli (Prunus capuli), Aguaymanto (Physalis peruviana), and Algarrobo (Prosopis pallida) were evaluated for total phenolics, antioxidant activity based on the 2,2-diphenyl-1-picrylhydrazyl radical scavenging assay, and functionality such as in vitro inhibition of alpha-amylase, alpha-glucosidase, and angiotensin I-converting enzyme (ACE), relevant for the potential management of hyperglycemia and hypertension linked to type 2 diabetes. The total phenolic content ranged from 3.2 (Aguaymanto) to 11.4 (Lucuma fruit) mg/g of sample dry weight. A significant positive correlation was found between total phenolic content and antioxidant activity for the ethanolic extracts. No phenolic compound was detected in Lucuma (fruit and powder) or Pacae. Aqueous extracts from Lucuma and Algarrobo had the highest alpha-glucosidase inhibitory activities. Papayita arequipena and Algarrobo had significant ACE inhibitory activities, reflecting antihypertensive potential. These in vitro results point to the excellent potential of Peruvian fruits in food-based strategies for complementing effective antidiabetic and antihypertensive solutions, to be confirmed by further animal and clinical studies.
Abstract:
This article focuses on identifying the number of paths of different lengths between pairs of nodes in complex networks, and on how these paths can be used to characterize topological properties of theoretical and real-world complex networks. The analysis reveals that path counts can provide better discrimination between network models than traditional network measurements. In addition, the analysis of real-world networks suggests that long-range connectivity tends to be limited in these networks and may be strongly related to network growth and organization.
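A minimal sketch of this kind of measurement using networkx: count simple paths between a node pair, grouped by length, for two network models of equal size (toy graphs; enumerating simple paths is exponential in general, hence the length cutoff):

```python
from collections import Counter
import networkx as nx

# Toy comparison of two network models with similar size and density.
n, seed = 50, 1
er = nx.gnm_random_graph(n, 150, seed=seed)       # Erdos-Renyi
ba = nx.barabasi_albert_graph(n, 3, seed=seed)    # scale-free

def path_length_profile(G, source, target, cutoff=6):
    """Number of simple paths between two nodes, grouped by length (edges)."""
    lengths = Counter(len(p) - 1
                      for p in nx.all_simple_paths(G, source, target, cutoff=cutoff))
    return dict(sorted(lengths.items()))

print("ER :", path_length_profile(er, 0, 1))
print("BA :", path_length_profile(ba, 0, 1))
```

Profiles like these, aggregated over many node pairs, are the sort of signature the article uses to discriminate between network models.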
Abstract:
The lightest supersymmetric particle may decay with branching ratios that correlate with neutrino oscillation parameters. In this case, the CERN Large Hadron Collider (LHC) has the potential to probe the atmospheric neutrino mixing angle with sensitivity competitive with its low-energy determination by underground experiments. Under realistic detection assumptions, we identify the necessary conditions for the LHC experiments to probe the simplest scenario for neutrino masses induced by minimal supergravity with bilinear R-parity violation.
Abstract:
This article discusses possible approaches for upgrading optical network capacity by considering the use of different modulation formats at 40 Gb/s. First, a performance evaluation is carried out regarding tolerance to three impairments: spectral narrowing due to filter cascading, chromatic dispersion, and self-phase modulation. Next, a cost-benefit analysis is conducted, considering the specific optoelectronic components required for the optical transmitter/receiver configuration of each format.
Abstract:
There are several ways to model a building and its heat gains from external and internal sources in order to evaluate proper operation, audit retrofit actions, and forecast energy consumption. Different techniques, varying from simple regression to models based on physical principles, can be used for simulation. A common assumption for all these models is that the input variables should be based on realistic data when available; otherwise, the estimated energy consumption might be highly under- or overestimated. In this paper, a comparison is made between a simple model based on an artificial neural network (ANN) and a model based on physical principles (EnergyPlus) as auditing and prediction tools for forecasting building energy consumption. The Administration Building of the University of Sao Paulo is used as a case study. Building energy consumption profiles were collected, along with campus meteorological data. Results show that both models are suitable for energy consumption forecasting. Additionally, a parametric analysis of the building is carried out in EnergyPlus in order to evaluate the influence of several parameters, such as the building occupation profile and weather data, on the forecast.
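A minimal sketch of the ANN side of such a comparison: a small feedforward network regressing daily consumption on weather and occupancy features (synthetic data; the study's actual inputs, architecture, and EnergyPlus model are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic stand-in for measured data: [outdoor temperature (C), occupancy fraction].
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(15, 35, 500), rng.uniform(0, 1, 500)])
# Hypothetical daily consumption (kWh): base + cooling + occupancy loads + noise.
y = 50 + 3.0 * np.maximum(X[:, 0] - 22, 0) + 40 * X[:, 1] + rng.normal(0, 2, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16),
                                 max_iter=5000, random_state=0))
ann.fit(X_train, y_train)
print("R^2 on held-out days:", round(ann.score(X_test, y_test), 3))
```

The data-driven model needs only measured profiles, whereas the physical model needs a full building description; that trade-off is what the paper's comparison examines.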
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Abstract:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The transient and steady-state performance at each individual node within the network is analyzed using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also improved steady-state performance compared with an LMS-based scheme. In addition, the new approach attains acceptable misadjustment with lower computational and memory cost than a distributed recursive-least-squares (RLS) based method, provided the number of regressor vectors and the filter length are appropriately chosen.
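A minimal sketch of the standard affine projection update at a single node (the incremental network logic and the paper's spatial-temporal analysis are omitted; step size, regularization, and data are hypothetical):

```python
import numpy as np

def apa_update(w, U, d, mu=0.5, eps=1e-4):
    """One affine projection step: w <- w + mu * U (U^T U + eps I)^(-1) e,
    where e = d - U^T w uses the K most recent regressors (columns of U)."""
    e = d - U.T @ w
    K = U.shape[1]
    return w + mu * U @ np.linalg.solve(U.T @ U + eps * np.eye(K), e)

# Identify an unknown length-M filter from colored (AR(1)) input,
# the regime where plain LMS converges slowly.
rng = np.random.default_rng(0)
M, K = 8, 4
w_true = rng.normal(size=M)
x = np.zeros(3000)
for n in range(1, x.size):
    x[n] = 0.9 * x[n - 1] + rng.normal()   # colored input

w = np.zeros(M)
for n in range(M + K, x.size):
    # K most recent length-M regressors as columns of U.
    U = np.column_stack([x[n - k - M + 1 : n - k + 1][::-1] for k in range(K)])
    d = U.T @ w_true + 0.01 * rng.normal(size=K)
    w = apa_update(w, U, d)

print("parameter error:", round(float(np.linalg.norm(w - w_true)), 4))
```

Reusing K past regressors per update is what buys the faster convergence on colored inputs, at a cost between LMS (K=1) and RLS.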