944 results for Mixed binary linear programming
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models, based on the MEV and mixed logit models, that allow paths to be correlated. The resulting route choice models are expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
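For background, choice probabilities in a logit dynamic discrete choice model come out of a logsumexp Bellman recursion over the network. The sketch below illustrates that recursion on a tiny acyclic network; the network, utilities, and function names are illustrative stand-ins, not the thesis's own code.

```python
import math

# Hypothetical acyclic network: each state maps to outgoing arcs
# (next_state, deterministic utility). "D" is the destination.
arcs = {
    "A": [("B", -1.0), ("C", -1.5)],
    "B": [("D", -1.0)],
    "C": [("D", -0.5)],
    "D": [],
}

def values(arcs, dest):
    """Backward induction: V(s) = log sum_a exp(v(a|s) + V(next(a))),
    the logsumexp Bellman equation arising from Gumbel-distributed errors."""
    V = {dest: 0.0}
    for s in ["C", "B", "A"]:  # a reverse topological order of this network
        V[s] = math.log(sum(math.exp(u + V[t]) for t, u in arcs[s]))
    return V

def choice_probs(arcs, V, s):
    """Logit probability of each outgoing arc at state s."""
    z = [math.exp(u + V[t]) for t, u in arcs[s]]
    total = sum(z)
    return {t: p / total for (t, _), p in zip(arcs[s], z)}

V = values(arcs, "D")
probs = choice_probs(arcs, V, "A")  # arc probabilities at the origin sum to 1
```

Because the recursion touches each arc once per backward pass, path probabilities never need to be enumerated explicitly, which is what makes the dynamic programming reformulation attractive for large networks.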
Abstract:
This thesis is part of the fields of Material Physics and Organic Electronics and aims to determine the charge carrier density and mobility in the hydrated conducting polymer–polyelectrolyte blend PEDOT:PSS. This kind of material combines electronic semiconductor functionality with selective ionic transport, biocompatibility and electrochemical stability in water. This advantageous combination of material properties makes PEDOT:PSS a unique material for building organic electrochemical transistors (OECTs), which have relevant applications as amplifying transducers for bioelectronic signals. In order to measure charge carrier density and mobility, an innovative 4-wire, contact-independent characterization technique was introduced, the electrolyte-gated van der Pauw (EgVDP) method, which was combined with electrochemical impedance spectroscopy. The technique was applied to macroscopic thin-film samples and micro-structured PEDOT:PSS thin-film devices fabricated using photolithography. The EgVDP method proved to be effective for measuring hole mobility in hydrated PEDOT:PSS thin films, which was found to be <μ>=(0.67±0.02) cm^2/(V*s). By comparing this result with 2-point-probe measurements, we found that contact resistance effects led to a mobility overestimation in the latter. Ion accumulation at the drain contact creates a gate-dependent potential barrier and is discussed as a probable reason for the overestimation in 2-point-probe measurements. The measured charge transport properties of PEDOT:PSS were analyzed in the framework of an extended drift-diffusion model. The extended model also fits the non-linear response in the transport characterization well, and the results suggest a Gaussian DOS for PEDOT:PSS. The PEDOT:PSS–electrolyte interface capacitance was found to be voltage-independent, confirming the hypothesis of its morphological origin, related to the separation between the electronic (PEDOT) and ionic (PSS) phases in the blend.
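For context, the van der Pauw method extracts the sheet resistance R_s of a thin film from two four-terminal resistance measurements R_A and R_B via the implicit equation exp(-πR_A/R_s) + exp(-πR_B/R_s) = 1. A minimal numerical sketch, with the bisection solver and example values as illustrative assumptions:

```python
import math

def vdp_sheet_resistance(RA, RB, tol=1e-12):
    """Solve the van der Pauw equation
        exp(-pi*RA/Rs) + exp(-pi*RB/Rs) = 1
    for the sheet resistance Rs (ohms/square) by bisection."""
    f = lambda Rs: math.exp(-math.pi * RA / Rs) + math.exp(-math.pi * RB / Rs) - 1.0
    lo, hi = 1e-9 * (RA + RB), 1e9 * (RA + RB)
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        # f increases with Rs (exponents approach 0, so the sum exceeds 1):
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Symmetric case RA = RB = R reduces analytically to Rs = pi * R / ln 2.
Rs = vdp_sheet_resistance(100.0, 100.0)

# With film thickness t and a mobility mu (e.g. from a gated measurement),
# the hole density would follow as p = 1 / (e * mu * Rs * t).
```

The combination with a gate electrolyte in the EgVDP method keeps this four-terminal geometry, which is why the extracted mobility is free of the contact resistance that biases 2-point-probe measurements.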
Abstract:
The study of ancient, undeciphered scripts presents unique challenges that depend both on the nature of the problem and on the peculiarities of each writing system. In this thesis, I present two computational approaches that are tailored to two different tasks and writing systems. The first of these methods is aimed at the decipherment of the Linear A fraction signs, in order to discover their numerical values. This is achieved with a combination of constraint programming, ad-hoc metrics and paleographic considerations. The second main contribution of this thesis concerns the creation of an unsupervised deep learning model which uses drawings of signs from ancient writing systems to learn to distinguish different graphemes in the vector space. This system, which is based on techniques used in the field of computer vision, is adapted to the study of ancient writing systems by incorporating information about sequences in the model, mirroring what is often done in natural language processing. In order to develop this model, the Cypriot Greek Syllabary is used as a target, since this is a deciphered writing system. Finally, this unsupervised model is adapted to the undeciphered Cypro-Minoan and used to answer open questions about this script. In particular, by reconstructing multiple allographs that are not agreed upon by paleographers, it supports the idea that Cypro-Minoan is a single script and not a collection of three scripts, as has been proposed in the literature. These results on two different tasks show that computational methods can be applied to undeciphered scripts, despite the relatively small amount of available data, paving the way for further advances in paleography using these methods.
Abstract:
Embedded systems are increasingly integral to daily life, improving the efficiency of modern Cyber-Physical Systems, which provide access to sensor data and actuators. As modern architectures become increasingly complex and heterogeneous, their optimization becomes a challenging task. Additionally, ensuring platform security is important to avoid harm to individuals and assets. This study primarily addresses challenges in contemporary Embedded Systems, focusing on platform optimization and security enforcement. The initial section of this study delves into the application of machine learning methods to efficiently determine the optimal number of cores for a parallel RISC-V cluster so as to minimize energy consumption, using static source code analysis. Results demonstrate not only that automated platform configuration is viable, but also that there is only a moderate performance trade-off when relying solely on static features. The second part addresses the problem of heterogeneous device mapping, which involves assigning tasks to the most suitable computational device in a heterogeneous platform for optimal runtime. The contribution of this section lies in the introduction of novel pre-processing techniques, along with a Siamese-network training framework, that enhance the classification performance of DeepLLVM, an advanced approach for task mapping. Importantly, these proposed approaches are independent of the specific deep-learning model used. Finally, this research work addresses issues concerning the binary exploitation of software running on modern Embedded Systems. It proposes an architecture to implement Control-Flow Integrity in embedded platforms with a Root-of-Trust, aiming to enhance security guarantees with limited hardware modifications.
The approach involves enhancing the architecture of a modern RISC-V platform for autonomous vehicles by implementing a side-channel communication mechanism that relays control-flow changes executed by the process running on the host core to the Root-of-Trust. This approach has a limited impact on performance and is effective in enhancing the security of embedded platforms.
Abstract:
Modern High-Performance Computing (HPC) systems are gradually increasing in size and complexity due to the corresponding demand for larger simulations requiring more complicated tasks and higher accuracy. However, as a side effect of Dennard scaling approaching its ultimate power limit, software efficiency also plays an important role in increasing the overall performance of a computation. Tools to measure application performance in these increasingly complex environments provide insights into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces like Intel's Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool like the Performance Application Programming Interface (PAPI). Since several problems in many heterogeneous fields can be represented as a complex linear system, an optimized and scalable linear system solver can significantly decrease the time needed to compute a solution. One of the most widely used algorithms for solving large simulations is Gaussian Elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. However, another relevant algorithm, which is gaining popularity in the academic field, is the Inhibition Method. This thesis compares the energy consumption of the Inhibition Method and of Gaussian Elimination from ScaLAPACK, profiling their execution during the resolution of linear systems on the HPC architecture offered by CINECA. Moreover, it also collates the energy and power values for different rank, node, and socket configurations. The monitoring tools employed to track the energy consumption of these algorithms are PAPI and RAPL, which are integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
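As background on the RAPL side of such measurements: on Linux, RAPL exposes a cumulative energy counter in microjoules (e.g. `energy_uj` under the powercap sysfs tree) that wraps around at a value reported by `max_energy_range_uj`. A minimal sketch of turning two readings into joules, with the sysfs path and single-wrap assumption stated as assumptions:

```python
def energy_delta_j(before_uj, after_uj, max_range_uj):
    """Energy consumed between two RAPL counter readings, in joules.
    The energy_uj counter is cumulative and wraps at max_range_uj, so a
    single wrap between the two readings must be handled explicitly."""
    if after_uj >= before_uj:
        delta_uj = after_uj - before_uj
    else:
        delta_uj = max_range_uj - before_uj + after_uj  # counter wrapped once
    return delta_uj / 1e6

# Reading the counter (the usual Linux powercap layout; an assumption here):
# with open("/sys/class/powercap/intel-rapl:0/energy_uj") as f:
#     before = int(f.read())
```

Higher-level tools such as PAPI wrap exactly this kind of bookkeeping, which is why they are commonly layered on top of RAPL for per-application measurements.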
Abstract:
Privacy issues and data scarcity in the PET field call for efficient methods to expand datasets via synthetic generation of new data that cannot be traced back to real patients and that are also realistic. In this thesis, machine learning techniques were applied to 1001 amyloid-beta PET images that had undergone a diagnosis of Alzheimer's disease: the evaluations were 540 positive, 457 negative and 4 unknown. The Isomap algorithm was used as a manifold learning method to reduce the dimensions of the PET dataset; a numerical scale-free interpolation method was applied to invert the dimensionality reduction map. The interpolant was tested on the PET images via leave-one-out cross-validation (LOOCV), where the removed images were compared with the reconstructed ones using the mean SSIM index (MSSIM = 0.76 ± 0.06). The effectiveness of this measure is questioned, since it indicated slightly higher performance for a comparison method using PCA (MSSIM = 0.79 ± 0.06), even though PCA gave reconstructed images of clearly poorer quality than those recovered by the numerical inverse mapping. Ten synthetic PET images were generated and, after having been mixed with ten originals, were sent to a team of clinicians for visual assessment of their realism; no significant agreement was found either between the clinicians and the true image labels or among the clinicians, meaning that original and synthetic images were indistinguishable. The future perspective of this thesis points to the improvement of the amyloid-beta PET research field by increasing the available data, overcoming the constraints of data acquisition and privacy issues. Potential improvements can be achieved via refinements of the manifold learning and inverse mapping stages of the PET image analysis, by exploring different combinations of algorithm parameters and by applying other non-linear dimensionality reduction algorithms. A final prospect of this work is the search for new methods to assess image reconstruction quality.
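The LOOCV protocol described above can be sketched generically: remove one image, rebuild it from the remaining ones, and score it against the original. In this sketch the reconstruction step and the similarity metric are toy placeholders for the thesis's Isomap-based inverse mapping and the SSIM index.

```python
import statistics

def loocv_mean_score(items, reconstruct, similarity):
    """Leave-one-out loop: hold out each item, reconstruct it from the rest,
    and return the mean and standard deviation of the similarity scores."""
    scores = []
    for i, held_out in enumerate(items):
        rest = items[:i] + items[i + 1:]
        scores.append(similarity(held_out, reconstruct(rest)))
    return statistics.mean(scores), statistics.stdev(scores)

# Toy stand-ins: "images" are scalars, reconstruction is the mean of the rest,
# and similarity decays with absolute error (both are illustrative only).
items = [1.0, 2.0, 3.0, 4.0]
mean_s, std_s = loocv_mean_score(
    items,
    reconstruct=lambda rest: sum(rest) / len(rest),
    similarity=lambda a, b: 1.0 / (1.0 + abs(a - b)),
)
```

Reporting the score as mean ± standard deviation over the held-out images is exactly the MSSIM = 0.76 ± 0.06 form quoted in the abstract.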
Abstract:
In this paper, a joint location-inventory model is proposed that simultaneously optimises strategic supply chain design decisions, such as facility location and customer allocation to facilities, and tactical-operational inventory management and production scheduling decisions. All this is analysed in a context of demand uncertainty and supply uncertainty. While demand uncertainty stems from potential fluctuations in customer demands over time, supply-side uncertainty is associated with the risk of "disruption" to which facilities may be subject. The latter is caused by external factors such as natural disasters, strikes, changes of ownership and information technology security incidents. The proposed model is formulated as a non-linear mixed integer programming problem minimising the expected total cost, which includes four basic cost items: the fixed cost of locating facilities at candidate sites, the cost of transport from facilities to customers, the cost of working inventory, and the cost of safety stock. Next, since the optimisation problem is very complex and only a small number of instances can be evaluated exactly, a "matheuristic" solution approach is presented. This approach has a twofold objective: on the one hand, it considers a larger number of facilities and customers within the network in order to reproduce a supply chain configuration that more closely reflects a real-world context; on the other hand, it serves to generate a starting solution and perform a series of iterations to try to improve it. Thanks to this algorithm, it was possible to obtain a solution with a lower total system cost than that of the initial solution. The study concludes with some reflections and a description of possible future research directions.
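To make the four cost items and the improvement loop concrete, here is a small sketch of evaluating an assignment of customers to facilities and of one local-search improvement pass. All data, the risk-pooling safety stock form z·sqrt(sum of demand variances), and the function names are assumptions for illustration, not the paper's formulation.

```python
import math

def total_cost(assign, fixed, transport, mu, var, h=1.0, z=1.96):
    """Sum the four cost items for a customer->facility assignment.
    The sqrt in the safety stock term is the non-linear part of the model."""
    open_fac = set(assign.values())
    cost = sum(fixed[j] for j in open_fac)                      # facility location
    cost += sum(transport[(c, j)] for c, j in assign.items())   # transport
    for j in open_fac:
        served = [c for c, jj in assign.items() if jj == j]
        cost += h * sum(mu[c] for c in served) / 2.0            # working inventory
        cost += h * z * math.sqrt(sum(var[c] for c in served))  # safety stock
    return cost

def improve(assign, facilities, *cost_args):
    """One improvement pass in the spirit of the matheuristic: try moving
    each customer to every facility and keep any cost-reducing move."""
    best = total_cost(assign, *cost_args)
    for c in list(assign):
        for j in facilities:
            trial = dict(assign, **{c: j})
            cand = total_cost(trial, *cost_args)
            if cand < best:
                assign, best = trial, cand
    return assign, best

# Tiny illustrative instance: two facilities, two customers.
fixed = {"F1": 10.0, "F2": 10.0}
transport = {("c1", "F1"): 1.0, ("c1", "F2"): 5.0,
             ("c2", "F1"): 5.0, ("c2", "F2"): 1.0}
mu = {"c1": 2.0, "c2": 2.0}    # mean demands
var = {"c1": 1.0, "c2": 1.0}   # demand variances
start = {"c1": "F1", "c2": "F1"}
best_assign, best_cost = improve(start, ["F1", "F2"], fixed, transport, mu, var)
```

Iterating such passes from a constructed starting solution is the "generate then improve" pattern the abstract attributes to the matheuristic.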
Abstract:
The aim was to evaluate the relationship between orofacial function, dentofacial morphology, and bite force in young subjects. Three hundred and sixteen subjects were divided according to dentition stage (early, intermediate, and late mixed, and permanent dentition). Orofacial function was screened using the Nordic Orofacial Test-Screening (NOT-S). Orthodontic treatment need, bite force, lateral and frontal craniofacial dimensions, and the presence of sleep bruxism were also assessed. The results were submitted to descriptive statistics, normality and correlation tests, analysis of variance, and multiple linear regression to test the relationship between NOT-S scores and the studied independent variables. The variance of NOT-S scores between groups was not significant. The evaluation of the variables that contributed significantly to NOT-S score variation showed that age and the presence of bruxism were related to higher NOT-S total scores, while an increase in overbite measurement and the presence of a closed lip posture were related to lower scores. Bite force did not show a significant relationship with orofacial dysfunction scores. No significant correlations between craniofacial dimensions and NOT-S scores were observed. In conclusion, age and sleep bruxism were related to higher NOT-S scores, while an increase in overbite measurement and a closed lip posture contributed to lower orofacial dysfunction scores.
Abstract:
This study investigated the effect of simulated microwave disinfection (SMD) on the linear dimensional changes, hardness and impact strength of acrylic resins under different polymerization cycles. Metal dies with referential points were embedded in flasks with dental stone. Samples of Classico and Vipi acrylic resins were made following the manufacturers' recommendations. The assessed polymerization cycles were: A) water bath at 74 ºC for 9 h; B) water bath at 74 ºC for 8 h and temperature increased to 100 ºC for 1 h; C) water bath at 74 ºC for 2 h and temperature increased to 100 ºC for 1 h; and D) water bath at 120 ºC and pressure of 60 pounds. Linear dimensional distances in length and width were measured after SMD and water storage at 37 ºC for 7 and 30 days using an optical microscope. SMD was carried out with the samples immersed in 150 mL of water in an oven (650 W for 3 min). A load of 25 gf for 10 s was used in the hardness test. The Charpy impact test was performed with 40 kpcm. Data were submitted to ANOVA and Tukey's test (5%). The Classico resin was dimensionally steady in length in the A and D cycles for all periods, while the Vipi resin was steady in the A, B and C cycles for all periods. The Classico resin was dimensionally steady in width in the C and D cycles for all periods, and the Vipi resin was steady in all cycles and periods. The hardness values for Classico resin were steady in all cycles and periods, while the Vipi resin was steady only in the C cycle for all periods. Impact strength values for Classico resin were steady in the A, C and D cycles for all periods, while Vipi resin was steady in all cycles and periods. SMD promoted different effects on the linear dimensional changes, hardness and impact strength of acrylic resins submitted to different polymerization cycles when both SMD and water storage were considered.
Abstract:
Patients with myofascial pain experience impaired mastication, which might also interfere with their sleep quality. The purpose of this study was to evaluate the jaw motion and sleep quality of patients with myofascial pain and the impact of a stabilization device therapy on both parameters. Fifty women diagnosed with myofascial pain by the Research Diagnostic Criteria were enrolled. Pain levels (visual analog scale), jaw movements (kinesiography), and sleep quality (Epworth Sleepiness Scale; Pittsburgh Sleep Quality Index) were evaluated before (control) and after stabilization device use. Range of motion (maximum opening, right and left excursions, and protrusion) and masticatory movements during Optosil mastication (opening, closing, and total cycle time; opening and closing angles; and maximum velocity) also were evaluated. Repeated-measures analysis of variance in a generalized linear mixed models procedure was used for statistical analysis (α=.05). At baseline, participants with myofascial pain showed a reduced range of jaw motion and poorer sleep quality. Treatment with a stabilization device reduced pain (P<.001) and increased both mouth opening (P<.001) and anteroposterior movement (P=.01). Also, after treatment, the maximum opening (P<.001) and closing (P=.04) velocities during mastication increased, and improvements in sleep scores for the Pittsburgh Sleep Quality Index (P<.001) and Epworth Sleepiness Scale (P=.04) were found. Myofascial pain impairs jaw motion and quality of sleep; the reduction of pain after the use of a stabilization device improves the range of motion and sleep parameters.
Abstract:
In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits, depending on the quantification assays. A complication arises when these continuous repeated measures exhibit heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization (EM)-type algorithm is developed for computing the maximum likelihood estimates, obtaining as by-products the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus (HIV)-AIDS study and several simulation studies.
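For reference, the damped exponential correlation structure mentioned above is commonly written as follows (a standard parameterization, stated here as background rather than quoted from the paper): for measurements of the same subject at times $t_{ij}$ and $t_{ik}$,

```latex
\operatorname{Corr}\left(y_{ij}, y_{ik}\right)
  = \phi_1^{\,|t_{ij}-t_{ik}|^{\phi_2}},
  \qquad 0 < \phi_1 < 1, \quad \phi_2 \ge 0,
```

so that $\phi_2 = 1$ recovers a continuous-time AR(1) structure while $\phi_2 = 0$ gives compound symmetry; the damping parameter is what lets the model accommodate irregularly spaced observation times.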
Abstract:
Often in biomedical research, we deal with continuous (clustered) proportion responses ranging between zero and one that quantify the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing proportion values in the interval [0, 1]. Regression on a variety of parametric densities with support lying in (0, 1), such as beta regression, can assess important covariate effects. However, these densities are deemed inappropriate in the presence of zeros and/or ones. To avoid this, we introduce a class of general proportion densities and further augment the probabilities of zero and one to this density, controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to freely available software. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and an application to a real dataset from a clinical periodontology study.
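The zero-and-one augmentation described above typically takes a mixture form (standard notation, given here as background rather than quoted from the paper): for a response $y \in [0, 1]$,

```latex
p(y) \;=\; p_0\,\mathbb{I}(y=0) \;+\; p_1\,\mathbb{I}(y=1)
       \;+\; (1 - p_0 - p_1)\, f(y)\,\mathbb{I}(0<y<1),
```

where $f$ is a density on $(0,1)$ (here, a member of the general proportion density class) and $p_0$, $p_1$ are the augmented point masses at zero and one, so the boundary observations no longer break the likelihood.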
Abstract:
The use of screening techniques, such as an alternative light source (ALS), is important for finding biological evidence at a crime scene. The objective of this study was to evaluate whether the fluorescence of biological fluids (blood, semen, saliva, and urine) deposited on different surfaces changes as a function of the age of the sample. Stains were illuminated with a Megamaxx™ ALS System and photographed with a Canon EOS Utility™ camera. Adobe Photoshop™ was used to prepare the photographs for analysis, and ImageJ™ was then used to record the brightness values of pixels in the images. Data were submitted to analysis of variance using a generalized linear mixed model with two fixed effects (surface and fluid). Time was treated as a random effect (through repeated measures) with a first-order autoregressive covariance structure. Means of significant effects were compared by the Tukey test. The fluorescence of the analyzed biological material varied depending on the age of the sample. Fluorescence was lower when the samples were moist. Once a sample was dry, its fluorescence remained constant up to the maximum period analyzed (60 days), independent of the substrate on which the fluid was deposited, showing the novelty of this study. Therefore, the forensic expert can detect biological fluids at the crime scene using an ALS even several days after a crime has occurred.
Abstract:
The aim of this work was to evaluate the floristic composition, richness, and diversity of the upper and lower strata of a stretch of mixed rain forest near the city of Itaberá, in southeastern Brazil. We also investigated the differences between this conservation area and other stretches of mixed rain forest in southern and southeastern Brazil, as well as other nearby forest formations, in terms of their floristic relationships. For our survey of the upper stratum (diameter at breast height [DBH] > 15 cm), we established 50 permanent plots of 10 × 20 m. Within each of those plots, we designated five randomly located 1 × 1 m subplots in order to survey the lower stratum (total height > 30 cm and DBH < 15 cm). In the upper stratum, we sampled 1429 trees and shrubs, belonging to 134 species, 93 genera, and 47 families. In the lower stratum, we sampled 758 trees and shrubs, belonging to 93 species, 66 genera, and 39 families. In our floristic and phytosociological surveys, we recorded 177 species, belonging to 106 genera and 52 families. The Shannon Diversity Index was 4.12 and 3.5 for the upper and lower strata, respectively. Cluster analysis indicated that nearby forest formations had the strongest floristic influence on the study area, which was therefore distinct from other mixed rain forests in southern Brazil and in the Serra da Mantiqueira mountain range.
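The Shannon Diversity Index reported above follows the usual formula H′ = −Σ p_i ln p_i over species proportions; a minimal sketch, with the species counts purely illustrative:

```python
import math

def shannon_index(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i), where p_i is the
    proportion of individuals belonging to species i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Toy community of four equally abundant species: H' = ln 4.
H = shannon_index([10, 10, 10, 10])
```

H′ grows both with the number of species and with the evenness of their abundances, which is why the richer upper stratum (134 species) scored higher (4.12) than the lower stratum (93 species, 3.5).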