976 results for Set-Valued Functions
Abstract:
The Battery Energy Storage System (BSE) offers formidable advantages in the fields of electric power generation, transmission, distribution, and consumption. This technology is regarded by several operators around the world both as a new device for injecting large amounts of renewable energy and as an essential component of large power grids. Moreover, considerable benefits can be associated with deploying BSE technology in smart grids as well as with reducing greenhouse gas emissions, reducing marginal losses, supplying certain consumers with emergency power, improving energy management, and increasing energy efficiency in the grids. This thesis comprises three stages: Stage 1 concerns the use of the BSE to reduce electrical losses; Stage 2 uses the BSE as a spinning-reserve element to mitigate grid vulnerability; and Stage 3 introduces a new method for damping frequency oscillations through reactive power modulation, together with the use of the BSE to provide the primary frequency reserve. The first Stage, on using the BSE to reduce losses, is itself divided into two sub-stages, the first devoted to optimal allocation and the second to optimal operation. In the first sub-stage, the NSGA-II genetic algorithm (Non-dominated Sorting Genetic Algorithm II) was programmed on CASIR, IREQ's supercomputer, as a multi-objective evolutionary algorithm that extracts a set of solutions for the optimal sizing and suitable placement of multiple BSE units, with the minimization of power losses and the total installed power capacity of the BSE units considered as objective functions. This first sub-stage gives a satisfactory answer to the allocation problem and also addresses scheduling within the Québec interconnection. To meet the objective of the second sub-stage, a number of solutions were selected and implemented over a one-year period, taking into account the parameters (time, capacity, efficiency, power factor) associated with the BSE charge and discharge cycles, with the reduction of marginal losses and energy efficiency as the main objectives. For the second Stage, a new vulnerability index, well suited to modern grids equipped with BSE, was introduced, formalized, and studied. The NSGA-II genetic algorithm was run again, with the minimization of the proposed vulnerability index and energy efficiency as the main objectives. The results show that using the BSE can, in some cases, prevent major grid outages. The third Stage presents a new concept of adding virtual inertia to power grids through reactive power modulation, and then presents the use of the BSE as a primary frequency reserve.
Finally, a generic BSE model, associated with the Québec interconnection, was proposed in a MATLAB environment. The simulation results confirm that the active and reactive power of the BSE system can be used for frequency regulation.
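The allocation step above amounts to extracting a set of non-dominated trade-off solutions between two objectives, power losses and total installed BSE capacity. The sketch below is a minimal Python illustration of that selection step only, not the NSGA-II code run on CASIR; the candidate solutions and their objective values are hypothetical.

```python
import numpy as np

def pareto_front(objectives):
    """Return the indices of non-dominated rows (both objectives minimized)."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Candidate i is dominated if some other candidate is no worse in
        # every objective and strictly better in at least one.
        dominates_i = np.all(objectives <= objectives[i], axis=1) & \
                      np.any(objectives < objectives[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Hypothetical candidates: columns are [power losses (MW), installed BSE capacity (MW)].
candidates = np.array([[12.0,  80.0],
                       [11.0,  90.0],
                       [12.5, 100.0],   # dominated by the first two candidates
                       [13.0,  60.0],
                       [10.4, 150.0]])
print(pareto_front(candidates))  # -> [0 1 3 4], the trade-off (Pareto) set
```

NSGA-II repeats this kind of non-dominated sorting generation after generation while evolving the candidate placements and sizes.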
Abstract:
Explanation of the Minimum Data Set (MDS), implementation of Section Q, overview of the program, local contacts and functions, Referral Agency information, and the role and assistance provided by the Long-Term Care Ombudsman.
Abstract:
Causal inference with a continuous treatment is a relatively under-explored problem. In this dissertation, we adopt the potential outcomes framework. Potential outcomes are responses that would be seen for a unit under all possible treatments. In an observational study where the treatment is continuous, the potential outcomes are an uncountably infinite set indexed by treatment dose. We parameterize this unobservable set as a linear combination of a finite number of basis functions whose coefficients vary across units. This leads to new techniques for estimating the population average dose-response function (ADRF). Some techniques require a model for the treatment assignment given covariates, some require a model for predicting the potential outcomes from covariates, and some require both. We develop these techniques using a framework of estimating functions, compare them to existing methods for continuous treatments, and simulate their performance in a population where the ADRF is linear and the models for the treatment and/or outcomes may be misspecified. We also extend the comparisons to a data set of lottery winners in Massachusetts. Next, we describe the methods and functions in the R package causaldrf using data from the National Medical Expenditure Survey (NMES) and Infant Health and Development Program (IHDP) as examples. Additionally, we analyze the National Growth and Health Study (NGHS) data set and deal with the issue of missing data. Lastly, we discuss future research goals and possible extensions.
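One of the estimator families mentioned above needs only a model for predicting the potential outcomes from covariates. The sketch below is a hypothetical Python illustration of that idea on simulated data with a simple linear basis in the treatment; it is not the estimating-function machinery or the causaldrf code of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: covariate x, continuous treatment t, outcome y.
n = 2000
x = rng.normal(size=n)
t = 0.5 * x + rng.normal(size=n)                    # treatment depends on the covariate
y = 1.0 + 2.0 * t + 1.5 * x + rng.normal(size=n)    # linear ADRF with slope 2

# Outcome-model estimator of the ADRF: fit E[Y | T, X] with a basis in T
# (here just [1, t, x]), then average the predictions over all units at each dose.
design = np.column_stack([np.ones(n), t, x])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

def adrf_hat(dose):
    preds = beta[0] + beta[1] * dose + beta[2] * x   # predicted Y(dose) for every unit
    return preds.mean()

for dose in (-1.0, 0.0, 1.0):
    print(dose, round(adrf_hat(dose), 2))            # tracks 1 + 2*dose since E[X] is about 0
```

Misspecifying this outcome model, or the treatment model used by the weighting-based estimators, is exactly the situation the simulations described above are designed to probe.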
Abstract:
In this paper we consider two sources of enhancement for the meshfree Lagrangian particle method smoothed particle hydrodynamics (SPH) by improving the accuracy of the particle approximation. Namely, we consider shape functions constructed using moving least-squares approximation (MLS) and radial basis functions (RBFs). Using MLS approximation is appealing because polynomial consistency of the particle approximation can be enforced. RBFs further appeal as they allow one to dispense with the smoothing length, the parameter in the SPH method which governs the number of particles within the support of the shape function. Currently, only ad hoc methods for choosing the smoothing length exist. We ensure that any enhancement retains the conservative and meshfree nature of SPH. In doing so, we derive a new set of variationally consistent hydrodynamic equations. Finally, we demonstrate the performance of the new equations on the Sod shock tube problem.
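A rough one-dimensional sketch of the two particle approximations being contrasted; the cubic-spline kernel, the parameter-free cubic RBF, and the test field below are illustrative choices, not the variationally consistent formulation derived in the paper.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic-spline SPH kernel with smoothing length h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

# Particles on an irregular 1D layout carrying a smooth field f(x) = sin(x).
x = np.sort(np.random.default_rng(1).uniform(0.0, np.pi, 40))
f = np.sin(x)
vol = np.gradient(x)                      # rough particle volumes m_j / rho_j

x_eval = np.linspace(0.3, np.pi - 0.3, 5)

# (a) SPH kernel summation: requires choosing a smoothing length h.
h = 2.0 * vol.mean()
f_sph = np.array([np.sum(f * vol * cubic_spline_kernel(xe - x, h)) for xe in x_eval])

# (b) RBF shape functions: cubic RBF |r|^3 plus a linear polynomial; the
# interpolation conditions fix the coefficients, so no smoothing length is needed.
A = np.abs(x[:, None] - x[None, :]) ** 3
P = np.column_stack([np.ones_like(x), x])
K = np.block([[A, P], [P.T, np.zeros((2, 2))]])
coeffs = np.linalg.solve(K, np.concatenate([f, np.zeros(2)]))
f_rbf = np.array([np.abs(xe - x) ** 3 @ coeffs[:-2] + coeffs[-2] + coeffs[-1] * xe
                  for xe in x_eval])

print(np.abs(f_sph - np.sin(x_eval)).max(), np.abs(f_rbf - np.sin(x_eval)).max())
```

The RBF interpolant reproduces the particle data exactly at the cost of a dense linear solve, whereas the kernel summation only needs local evaluations once h has been chosen; this is the trade-off behind dispensing with the smoothing length.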
Abstract:
The analysis of fluid behavior in multiphase flow is very relevant to guarantee system safety. The use of equipment to describe such behavior is subject to factors such as the high level of investment and of specialized labor required. The application of image processing techniques to flow analysis can be a good alternative; however, very little research has been done on it. In this context, this study aims at developing a new approach to image segmentation based on the Level Set method that connects active contours and prior knowledge. To do so, a shape model of the target object is trained and defined through a point distribution model, and this model is later inserted as one of the extension velocity functions for the evolution of the curve at the zero level of the level set method. The proposed approach creates a framework consisting of three energy terms and an extension velocity function, λLg(φ) + νAg(φ) + μP(φ) + θf. The first three terms of the equation are the same ones introduced in (LI; XU; FOX, 2005), and the last part of the equation, θf, is based on the representation of object shape proposed in this work. Two variations of the method are used: one restricted (Restrict Level Set, RLS) and the other without restriction (Free Level Set, FLS). The first is used to segment images containing targets with little variation in shape and pose. The second is used to correctly identify the shape of bubbles in gas-liquid two-phase flows. The efficiency and robustness of the RLS and FLS approaches are demonstrated on images of gas-liquid two-phase flows and on the HTZ image dataset (FERRARI et al., 2009). The results confirm the good performance of the proposed algorithms (RLS and FLS) and indicate that the approach may be used as an efficient method to validate and/or calibrate the various existing devices used as meters for two-phase flow properties, as well as in other image segmentation problems.
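For reference, the three energy terms taken from that formulation are commonly written as below, with φ the level set function, g an edge indicator function, δ the Dirac delta, and H the Heaviside function; the shape term θf introduced in this work is only indicated schematically, since its exact form is defined in the thesis itself.

```latex
\mathcal{E}(\phi) = \lambda L_g(\phi) + \nu A_g(\phi) + \mu P(\phi) + \theta f, \quad
P(\phi) = \int_\Omega \tfrac{1}{2}\bigl(|\nabla\phi| - 1\bigr)^2 \, dx, \quad
L_g(\phi) = \int_\Omega g\,\delta(\phi)\,|\nabla\phi| \, dx, \quad
A_g(\phi) = \int_\Omega g\,H(-\phi) \, dx
```

The P term keeps φ close to a signed distance function, L_g measures the edge-weighted length of the zero level curve, and A_g its weighted enclosed area.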
Abstract:
Image (video) retrieval is an interesting problem of retrieving images (videos) similar to the query. Images (videos) are represented in an input (feature) space, and similar images (videos) are obtained by finding nearest neighbors in the input representation space. Numerous input representations, both in real-valued and binary spaces, have been proposed for conducting faster retrieval. In this thesis, we present techniques that obtain improved input representations for retrieval in both supervised and unsupervised settings for images and videos. Supervised retrieval is a well-known problem of retrieving images of the same class as the query. We address the practical aspects of achieving faster retrieval with binary codes as input representations for the supervised setting in the first part, where binary codes are used as addresses into hash tables. In practice, using binary codes as addresses does not guarantee fast retrieval, as similar images are not mapped to the same binary code (address). We address this problem by presenting an efficient supervised hashing (binary encoding) method that aims to explicitly map all the images of the same class, ideally, to a unique binary code. We refer to the binary codes of the images as 'Semantic Binary Codes' and the unique code for all same-class images as the 'Class Binary Code'. We also propose a new class-based Hamming metric that dramatically reduces retrieval times for larger databases, where the Hamming distance is computed only to the class binary codes. We also propose a Deep Semantic Binary Code model, obtained by replacing the output layer of a popular convolutional neural network (AlexNet) with the class binary codes, and show that the hashing functions learned in this way outperform the state of the art while providing fast retrieval times. In the second part, we also address the problem of supervised retrieval by taking into account the relationship between classes. For a given query image, we want to retrieve images that preserve the relative order, i.e., we want to retrieve all same-class images first and then images of related classes before images of different classes. We learn such relationship-aware binary codes by minimizing the difference between the inner product of the binary codes and the similarity between the classes. We calculate the similarity between classes using output embedding vectors, which are vector representations of classes. Our method deviates from the other supervised binary encoding schemes as it is the first to use output embeddings for learning hashing functions. We also introduce new performance metrics that take into account the related-class retrieval results and show significant gains over the state of the art. High-dimensional descriptors like Fisher Vectors or the Vector of Locally Aggregated Descriptors have been shown to improve the performance of many computer vision applications, including retrieval. In the third part, we discuss an unsupervised technique for compressing high-dimensional vectors into high-dimensional binary codes to reduce storage complexity. In this approach, we deviate from adopting traditional hyperplane hashing functions and instead learn hyperspherical hashing functions. The proposed method overcomes the computational challenges of directly applying the spherical hashing algorithm, which is intractable for compressing high-dimensional vectors.
A practical hierarchical model that uses divide-and-conquer techniques with the Random Select and Adjust (RSA) procedure to compress such high-dimensional vectors is presented. We show that our proposed high-dimensional binary codes outperform the binary codes obtained using traditional hyperplane methods at higher compression ratios. In the last part of the thesis, we propose a retrieval-based solution to the zero-shot event classification problem, a setting where no training videos are available for the event. To do this, we learn a generic set of concept detectors and represent both videos and query events in the concept space. We then compute the similarity between the query event and each video in the concept space, and videos similar to the query event are classified as belonging to the event. We show that we significantly boost performance using concept features from other modalities.
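A toy sketch of the class-based Hamming lookup from the first part above; the codes and the simulated query below are hypothetical, not the learned semantic and class binary codes of the thesis. The point is that the query code is compared against one code per class rather than against every database item.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_classes = 64, 10

# Hypothetical class binary codes: one target code per class.
class_codes = rng.integers(0, 2, size=(n_classes, n_bits), dtype=np.uint8)

def hamming(code, codes):
    """Hamming distance between one binary code and a batch of binary codes."""
    return np.count_nonzero(code != codes, axis=-1)

# A query is encoded (simulated here as a noisy copy of class 3's code).
query = class_codes[3].copy()
query[rng.choice(n_bits, size=5, replace=False)] ^= 1

# Class-based lookup: n_classes distance computations instead of one per database item.
d = hamming(query, class_codes)
ranked = np.argsort(d)
print(ranked[0], d[ranked[0]])   # nearest class and its Hamming distance
```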
Abstract:
In this paper we deal with the problem of obtaining the set of k-additive measures dominating a fuzzy measure. This problem extends the problem of deriving the set of probabilities dominating a fuzzy measure, an important problem appearing in Decision Making and Game Theory. The solution proposed in the paper follows the line developed by Chateauneuf and Jaffray for dominating probabilities and continued by Miranda et al. for dominating k-additive belief functions. Here, we address the general case by transforming the problem into a similar one in which the involved set functions have a non-negative Möbius transform; this simplifies the problem and allows a result similar to the one developed for belief functions. Although the set obtained is very large, we show that the conditions cannot be sharpened. On the other hand, we also show that it is possible to define a more restrictive subset, providing a more natural extension of the result for probabilities, from which any k-additive dominating measure can be derived.
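For readers unfamiliar with the terminology, the standard definitions involved are summarized below, with N the finite referential set and μ a fuzzy measure (μ(∅) = 0, μ(N) = 1).

```latex
m(A) = \sum_{B \subseteq A} (-1)^{|A \setminus B|}\, \mu(B), \qquad
\mu(A) = \sum_{B \subseteq A} m(B) \quad \text{(M\"obius transform and its inverse)}
```

μ is k-additive when m(A) = 0 for every A with |A| > k (and m(A) ≠ 0 for some A with |A| = k), and a measure μ' dominates μ when μ'(A) ≥ μ(A) for every A ⊆ N.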
Abstract:
Transcription by RNA polymerase can induce the formation of hypernegatively supercoiled DNA both in vivo and in vitro. This phenomenon has been explained by a “twin-supercoiled-domain” model of transcription where a positively supercoiled domain is generated ahead of the RNA polymerase and a negatively supercoiled domain behind it. In E. coli cells, transcription-induced topological change of chromosomal DNA is expected to actively remodel chromosomal structure and greatly influence DNA transactions such as transcription, DNA replication, and recombination. In this study, an IPTG-inducible, two-plasmid system was established to study transcription-coupled DNA supercoiling (TCDS) in E. coli topA strains. By performing topology assays, biological studies, and RT-PCR experiments, TCDS in E. coli topA strains was found to be dependent on promoter strength. Expression of a membrane-insertion protein was not needed for strong promoters, although co-transcriptional synthesis of a polypeptide may be required. More importantly, it was demonstrated that the expression of a membrane-insertion tet gene was not sufficient for the production of hypernegatively supercoiled DNA. These phenomena can be explained by the “twin-supercoiled-domain” model of transcription where the friction force applied to E. coli RNA polymerase plays a critical role in the generation of hypernegatively supercoiled DNA. Additionally, in order to explore whether TCDS is able to greatly influence a coupled DNA transaction, such as activating a divergently-coupled promoter, an in vivo system was set up to study TCDS and its effects on the supercoiling-sensitive leu-500 promoter. The leu-500 mutation is a single A-to-G point mutation in the -10 region of the promoter controlling the leu operon, and the AT to GC mutation is expected to increase the energy barrier for the formation of a functional transcription open complex. Using luciferase assays and RT-PCR experiments, it was demonstrated that transient TCDS, “confined” within promoter regions, is responsible for activation of the coupled transcription initiation of the leu-500 promoter. Taken together, these results demonstrate that transcription is a major chromosomal remodeling force in E. coli cells.
Abstract:
In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency’s safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that the agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida are different from the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010 for both total and fatal and injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and the SafetyAnalyst default SPFs calibrated to Florida data was then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit tests, represented by the mean absolute deviance (MAD), the mean square prediction error (MSPE), and the Freeman-Tukey R² (R²FT), were also used for comparison in order to identify the better-fitting model. The results showed that Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of Florida-specific SPFs was further compared with that of the full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop compared to full SPFs.
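As a concrete reading of the three goodness-of-fit measures named above, the sketch below computes them on hypothetical validation counts in Python. The Freeman-Tukey R² is written in one form commonly used with crash count data and should be checked against the definition adopted in the dissertation.

```python
import numpy as np

def mad(obs, pred):
    """Mean absolute deviance: average absolute prediction error."""
    return np.mean(np.abs(obs - pred))

def mspe(obs, pred):
    """Mean squared prediction error."""
    return np.mean((obs - pred) ** 2)

def r2_freeman_tukey(obs, pred):
    """Freeman-Tukey R^2 based on variance-stabilized residuals.

    One common form for count data; the exact definition should be taken
    from the dissertation itself.
    """
    f = np.sqrt(obs) + np.sqrt(obs + 1.0)     # Freeman-Tukey transform of the counts
    f_hat = np.sqrt(4.0 * pred + 1.0)
    return 1.0 - np.sum((f - f_hat) ** 2) / np.sum((f - f.mean()) ** 2)

# Hypothetical observed and SPF-predicted crash counts on validation segments.
obs = np.array([3, 0, 5, 2, 7, 1, 4, 0, 2, 6], dtype=float)
pred = np.array([2.4, 0.6, 4.1, 2.2, 5.9, 1.3, 3.6, 0.4, 2.5, 5.1])

print(mad(obs, pred), mspe(obs, pred), r2_freeman_tukey(obs, pred))
```

Lower MAD and MSPE and a higher R² indicate the better-fitting SPF, which is how the Florida-specific and calibrated default models are compared.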
Abstract:
Logistic regression is a statistical tool widely used for predicting species’ potential distributions starting from presence/absence data and a set of independent variables. However, logistic regression equations compute probability values based not only on the values of the predictor variables but also on the relative proportion of presences and absences in the dataset, which does not adequately describe the environmental favourability for or against species presence. A few strategies have been used to circumvent this, but they usually imply an alteration of the original data or the discarding of potentially valuable information. We propose a way to obtain from logistic regression an environmental favourability function whose results are not affected by an uneven proportion of presences and absences. We tested the method on the distribution of virtual species in an imaginary territory. The favourability models yielded similar values regardless of the variation in the presence/absence ratio. We also illustrate the method with the example of the Pyrenean desman’s (Galemys pyrenaicus) distribution in Spain. The favourability model yielded more realistic potential distribution maps than the logistic regression model. Favourability values can be regarded as the degree of membership of the fuzzy set of sites whose environmental conditions are favourable to the species, which enables applying the rules of fuzzy logic to distribution modelling. They also allow for direct comparisons between models for species with different presence/absence ratios in the study area. This makes them more useful to estimate the conservation value of areas, to design ecological corridors, or to select appropriate areas for species reintroductions.
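The transformation below is a minimal sketch consistent with the favourability function described above, assuming the form F = (P/(1-P)) / (n1/n0 + P/(1-P)), where n1 and n0 are the numbers of presences and absences used to fit the logistic model. With this form, F equals 0.5 exactly when the predicted probability equals the sample prevalence, so values above 0.5 indicate conditions more favourable than average regardless of the presence/absence ratio.

```python
import numpy as np

def favourability(p, n_presences, n_absences):
    """Prevalence-corrected favourability from logistic-regression probabilities.

    Assumes F = (P/(1-P)) / (n1/n0 + P/(1-P)), which removes the effect of the
    presence/absence ratio n1/n0 on the fitted probabilities.
    """
    odds = p / (1.0 - p)
    return odds / (n_presences / n_absences + odds)

# Hypothetical probabilities from a model trained on 100 presences and 400 absences.
p = np.array([0.05, 0.20, 0.50, 0.80])
print(favourability(p, 100, 400))   # 0.20 equals the prevalence, so its favourability is 0.5
```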
Abstract:
Purpose. To determine the mechanisms predisposing to penile fracture as well as the rate of long-term penile deformity and erectile and voiding function. Methods. All fractures were repaired on an emergency basis via a subcoronal incision and absorbable suture, with simultaneous repair of any urethral lesion. Patients' status before the fracture and their long-term voiding and erectile function were assessed by periodic follow-up and phone calls. The detailed history included cause, symptoms, and a single-question self-report of erectile and voiding function. Results. Among the 44 suspected cases, 42 (95.4%) were confirmed; mean age was 34.5 years (range: 18-60) and mean follow-up 59.3 months (range: 9-155). Half presented the classical triad of audible crack, detumescence, and pain. Heterosexual intercourse was the most common cause (28 patients, 66.7%), followed by penile manipulation (6 patients, 14.3%) and homosexual intercourse (4 patients, 9.5%). Woman on top was the most common heterosexual position (n = 14, 50%), followed by doggy style (n = 8, 28.6%). In four patients (9.5%) the cause remained unclear. Six patients (14.3%) had urethral injury and two (4.8%) had erectile dysfunction, treated by penile prosthesis and PDE-5i. No patient showed urethral fistula, voiding deterioration, penile nodule/curvature, or pain. Conclusions. Woman on top was potentially the riskiest sexual position (50%). Immediate surgical treatment ensures very low long-term morbidity.
Abstract:
The first theoretical results of core-valence correlation effects are presented for the infrared wavenumbers and intensities of the BF3 and BCl3 molecules, using (double- and triple-zeta) Dunning core-valence basis sets at the CCSD(T) level. The results are compared with those calculated in the frozen core approximation with standard Dunning basis sets at the same correlation level and with the experimental values. The general conclusion is that the effect of core-valence correlation is, for infrared wavenumbers and intensities, smaller than the effect of adding augmented diffuse functions to the basis set, e.g., cc-pVTZ to aug-cc-pVTZ. Moreover, the trends observed in the data are mainly related to the augmented functions rather than the core-valence functions added to the basis set. The results obtained here confirm previous studies pointing out the large discrepancy between the theoretical and experimental intensities of the stretching mode for BCl3.
Abstract:
Streptococcus sanguinis is a commensal pioneer colonizer of teeth and an opportunistic pathogen of infectious endocarditis. The establishment of S. sanguinis in host sites likely requires dynamic fitting of the cell wall in response to local stimuli. In this study, we investigated the two-component system (TCS) VicRK in S. sanguinis (VicRKSs), which regulates genes of cell wall biogenesis, biofilm formation, and virulence in opportunistic pathogens. A vicK knockout mutant obtained from strain SK36 (SKvic) showed slight reductions in aerobic growth and resistance to oxidative stress but an impaired ability to form biofilms, a phenotype restored in the complemented mutant. The biofilm-defective phenotype was associated with reduced amounts of extracellular DNA during aerobic growth and with reduced production of H2O2, a metabolic product associated with DNA release and with the capacity of S. sanguinis to inhibit competitor species. No changes in autolysis or cell surface hydrophobicity were detected in SKvic. Reverse transcription-quantitative PCR (RT-qPCR), electrophoretic mobility shift assays (EMSA), and promoter sequence analyses revealed that VicR directly regulates genes encoding murein hydrolases (SSA_0094, cwdP, and gbpB) and spxB, which encodes pyruvate oxidase for H2O2 production. Genes previously associated with spxB expression (spxR, ccpA, ackA, and tpK) were not transcriptionally affected in SKvic. RT-qPCR analyses of S. sanguinis biofilm cells further showed upregulation of VicRK targets (spxB, gbpB, and SSA_0094) and other genes for biofilm formation (gtfP and comE) compared to expression in planktonic cells. This study provides evidence that VicRKSs regulates functions crucial for S. sanguinis establishment in biofilms and identifies novel VicRK targets potentially involved in hydrolytic activities of the cell wall required for these functions.
Abstract:
In this work, the energy response functions of a CdTe detector were obtained by Monte Carlo (MC) simulation in the energy range from 5 to 160 keV, using the PENELOPE code. The response calculations included the carrier transport features and the detector resolution. The computed energy response function was validated through comparison with experimental results obtained with ²⁴¹Am and ¹⁵²Eu sources. In order to investigate the influence of the correction by the detector response in the diagnostic energy range, x-ray spectra were measured using a CdTe detector (model XR-100T, Amptek) and then corrected by the energy response of the detector using the stripping procedure. Results showed that the CdTe detector exhibits a good energy response at low energies (below 40 keV), showing only small distortions in the measured spectra. For energies below about 80 keV, the contribution of the escape of Cd-K and Te-K x-rays produces significant distortions in the measured x-ray spectra. For higher energies, the most important corrections are for detector efficiency and carrier trapping effects. The results showed that, after correction by the energy response, the measured spectra are in good agreement with those provided by a theoretical model from the literature. Finally, our results showed that detailed knowledge of the response function and a proper correction procedure are fundamental for achieving more accurate spectra from which quality parameters (i.e., half-value layer and homogeneity coefficient) can be determined.
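A schematic version of a stripping correction of the kind referred to above is sketched below in Python; the response matrix, binning, and efficiency treatment of the actual study are not reproduced. Each column of the response matrix is the detector response to monoenergetic photons of one bin, so the incident spectrum can be recovered bin by bin starting from the highest energy.

```python
import numpy as np

def strip_spectrum(measured, response):
    """Correct a measured pulse-height spectrum by the detector response.

    measured: counts per energy bin.
    response: matrix whose column j is the detector response to monoenergetic
              photons of bin j, normalized per incident photon. A photon can only
              deposit at or below its full energy, so the matrix is upper
              triangular and is solved from the highest bin downwards.
    Returns the estimated incident photon spectrum.
    """
    n = measured.size
    remaining = measured.astype(float).copy()
    incident = np.zeros(n)
    for j in range(n - 1, -1, -1):                 # start at the highest energy bin
        # Counts still left in bin j are attributed to full-energy events of bin j.
        incident[j] = remaining[j] / response[j, j]
        # Subtract this component's partial-energy contribution from the lower bins.
        remaining -= incident[j] * response[:, j]
    return incident

# Toy 3-bin response: 80-90% full-energy deposition, the rest spread to lower bins.
R = np.array([[0.9, 0.1, 0.1],
              [0.0, 0.8, 0.1],
              [0.0, 0.0, 0.8]])
measured = R @ np.array([100.0, 50.0, 20.0])
print(strip_spectrum(measured, R))                 # recovers [100, 50, 20]
```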
Abstract:
In order to determine the energy needed to artificially dry an agricultural product, the latent heat of vaporization of the moisture in the product, H, must be known. Generally, the expressions for H reported in the literature are of the form H = h(T)f(M), where h(T) is the latent heat of vaporization of free water and f(M) is a function of the equilibrium moisture content, M, which is a simplification. In this article, a more general expression for the latent heat of vaporization, namely H = g(M,T), is used to determine H for cowpea, always-green variety. For this purpose, a computer program was developed which automatically fits about 500 functions, with one or two independent variables, embedded in its library to experimental data. The program uses nonlinear regression and ranks the best functions according to the lowest reduced chi-squared. A set of statistical tests shows that the generalized expression for H used in this work produces better results for H for cowpea than other equations found in the literature.
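A minimal sketch of the fit-and-rank idea described above, using hypothetical candidate forms and data and SciPy's nonlinear least squares rather than the program developed for the article: each candidate g(M, T) is fitted to the (M, T, H) data and the candidates are ordered by reduced chi-squared.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium moisture contents M (decimal, d.b.), temperatures T (degrees C),
# and latent heats H (kJ/kg); real values would come from the drying experiments.
M = np.array([0.10, 0.15, 0.20, 0.25, 0.10, 0.15, 0.20, 0.25])
T = np.array([30.0, 30.0, 30.0, 30.0, 50.0, 50.0, 50.0, 50.0])
H = np.array([2720.0, 2650.0, 2600.0, 2570.0, 2640.0, 2580.0, 2540.0, 2510.0])

# Two candidate forms for H = g(M, T); the program described above scans about 500 of these.
candidates = {
    "separable h(T)*f(M)": lambda X, a, b, c: (a + b * X[1]) * np.exp(-c * X[0]),
    "non-separable g(M,T)": lambda X, a, b, c: a + b * X[1] * np.exp(-c * X[0]),
}

for name, g in candidates.items():
    popt, _ = curve_fit(g, (M, T), H, p0=(2500.0, -1.0, 1.0), maxfev=10000)
    resid = H - g((M, T), *popt)
    dof = H.size - len(popt)
    chi2_red = np.sum(resid ** 2) / dof    # reduced chi-squared with unit weights
    print(f"{name}: reduced chi-squared = {chi2_red:.1f}")
```

With measurement uncertainties available, the residuals would be divided by them before forming the reduced chi-squared.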