344 results for permissible permutation


Relevance:

10.00%

Publisher:

Abstract:

The doctrine of fair use allows limited copying of creative works based on the rationale that copyright holders would consent to such uses if bargaining were possible. This paper develops a formal model of fair use in an effort to derive the efficient legal standard for applying the doctrine. The model interprets copies and originals as differentiated products and defines fair use as a threshold separating permissible copying from infringement. The analysis highlights the role of technology in shaping the efficient standard. Discussion of several key cases illustrates the applicability of the model.

Relevance:

10.00%

Publisher:

Abstract:

Hypertension is usually defined as a systolic blood pressure ≥140 mmHg or a diastolic blood pressure ≥90 mmHg. Hypertension is one of the main adverse effects of glucocorticoids on the cardiovascular system. Glucocorticoids are essential hormones secreted by the adrenal glands in a circadian fashion. Their effect on blood pressure is mediated by the glucocorticoid receptor (NR3C1), a ubiquitously expressed nuclear transcription factor. Although polymorphisms in this gene have long been implicated as causal factors for cardiovascular diseases such as hypertension, no study had yet thoroughly interrogated the gene's polymorphisms for their effect on blood pressure levels. I therefore first resequenced ∼30 kb of the gene, encompassing all exons, promoter regions, 5'/3' UTRs, and at least 1.5 kb of flanking sequence, in 114 chromosome 5 monosomic cell lines drawn from three major American ethnic groups: European American, African American, and Mexican American. I observed 115 polymorphisms and 14 common, molecularly phased haplotypes. A subset of markers was chosen for genotyping in the GENOA study populations (Genetic Epidemiology Network of Atherosclerosis; 1022 non-Hispanic whites, 1228 African Americans, and 954 Mexican Americans). Because these study populations include sibships, the family-based association test was applied to four blood pressure-related quantitative variables: pulse, systolic blood pressure, diastolic blood pressure, and mean arterial pressure. In these analyses, multiple correlated SNPs were significantly protective against high systolic blood pressure in non-Hispanic whites, including rs6198, a SNP previously associated with beneficial body composition. Haplotype association analysis supports this finding, and all p-values remained significant after permutation tests. I therefore conclude that multiple correlated SNPs in the gene may confer protection against high blood pressure in non-Hispanic whites.
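For readers unfamiliar with the permutation tests mentioned above, the idea is to recompute the association statistic under random relabelings of the phenotype and take the empirical tail probability. A minimal Python sketch, with hypothetical genotypes coded additively (0/1/2) and a simulated protective allele; none of the data or names below come from the study itself:

import numpy as np

def permutation_pvalue(genotypes, phenotype, n_perm=10000, seed=0):
    """Empirical p-value for a genotype-phenotype association.

    Test statistic: absolute Pearson correlation between allele counts
    (0/1/2) and a quantitative trait, compared against its distribution
    under random relabeling of the trait values.
    """
    rng = np.random.default_rng(seed)
    observed = abs(np.corrcoef(genotypes, phenotype)[0, 1])
    exceed = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(phenotype)
        if abs(np.corrcoef(genotypes, shuffled)[0, 1]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Hypothetical data: 200 subjects, additive coding of a biallelic SNP,
# with a mildly protective allele lowering systolic blood pressure.
rng = np.random.default_rng(1)
g = rng.integers(0, 3, size=200)
sbp = 130 - 2.0 * g + rng.normal(0, 10, size=200)
print(permutation_pvalue(g, sbp))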

Relevance:

10.00%

Publisher:

Abstract:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. Testing a great number of SNPs simultaneously, however, raises a multiple-testing problem and produces false-positive results. Although this problem can be dealt with effectively through approaches such as Bonferroni correction, permutation testing, and false discovery rates, patterns of joint effects among several genes, each with a weak individual effect, may still go undetected. With the availability of high-throughput genotyping technology, it has become possible to search for multiple scattered SNPs across the whole genome and to model their joint effect on the target variable. Exhaustive search of all SNP subsets is computationally infeasible for the millions of SNPs in a genome-wide study, so several feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in data sets where the number of feature SNPs far exceeds the number of observations.

In this study we took two steps toward this goal. First, we selected 1000 SNPs through a filter method; we then performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB) method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable, and compared it with classical linear discriminant analysis (LDA) in terms of classification performance. Finally, we performed chi-square tests to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features by the harmonic mean of sensitivity and specificity (HMSS) of an LDA classifier is better, in our study, than filtering by LDA training accuracy or by mutual information. They also demonstrate that exhaustive search over small subsets (one SNP, two SNPs, or three-SNP subsets built from the best 100 two-SNP combinations) can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always improve the performance of the subset. Although sequential forward floating selection can be applied to avoid the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from exploring more complex subset states.

Our results also indicate that HMSS, as a criterion for evaluating the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck, an unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to that of traditional LDA in this study.

The best test HMSS for predicting CVD, stroke, CAD, and psoriasis through sIB was 0.59406, 0.641815, 0.645315, and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls reached 0.708999, 0.863216, 0.639918, and 0.850275, respectively, in the four studies when the test accuracy among cases was required to be at least 0.4. Conversely, the highest test accuracy of sIB for diagnosing disease among cases reached 0.748644, 0.789916, 0.705701, and 0.749436, respectively, when the test accuracy among controls was required to be at least 0.4.

A further genome-wide association analysis by chi-square test detected no significant SNPs at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD, and only two significant SNPs associated with CAD in the WTCCC data. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy were also significantly associated with the disease by chi-square test at the cut-off value 1.11E-07.

Although our classification methods achieved high accuracy, complete descriptions of the classification results (95% confidence intervals or statistical tests of differences) would require more cost-effective methods or a more efficient computing system, neither of which was available for our genome-wide study. We also note that the purpose of this study was to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
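As an illustration of the HMSS filter criterion described above, the sketch below scores each SNP by the cross-validated HMSS of a single-feature LDA classifier and keeps the top k. The data layout and function names are assumptions for illustration, not the study's code:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

def hmss(y_true, y_pred):
    """Harmonic mean of sensitivity and specificity (HMSS)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return 2 * sens * spec / (sens + spec) if (sens + spec) else 0.0

def filter_snps(X, y, k=1000):
    """Rank SNP columns of X by cross-validated single-feature LDA HMSS.

    Assumes X is (n_subjects, n_snps) with additive 0/1/2 coding and
    y is a 0/1 disease label with both classes present.
    """
    scores = []
    for j in range(X.shape[1]):
        pred = cross_val_predict(LinearDiscriminantAnalysis(),
                                 X[:, [j]], y, cv=5)
        scores.append(hmss(y, pred))
    return np.argsort(scores)[::-1][:k]  # indices of the top-k SNPs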

Relevance:

10.00%

Publisher:

Abstract:

This study retrospectively evaluated the spatial and temporal disease patterns associated with influenza-like illness (ILI), positive rapid influenza antigen detection tests (RIDT), and confirmed H1N1 S-OIV cases reported to the Cameron County Department of Health and Human Services between April 26 and May 13, 2009, using the space-time permutation scan statistic software SaTScan in conjunction with the geographic information system (GIS) software ArcGIS 9.3. The rate and age-adjusted relative risk of each influenza measure were calculated, and a cluster analysis was conducted to determine the geographic regions with statistically higher incidence of disease. A Poisson regression model was developed to identify the effect that socioeconomic status, population density, and certain population attributes of a census block-group had on that area's frequency of confirmed S-OIV cases over the entire outbreak. Predominant among the spatiotemporal analyses of ILI, RIDT, and S-OIV cases in Cameron County is the consistent pattern of a high concentration of cases along the southern border with Mexico. These findings, together with the slight northward space-time shifts of the ILI and RIDT cluster centers, highlight the southern border as the primary site for public health interventions. Finally, the community-based multiple regression model revealed that three factors (the percentage of the population under age 15, average household size, and the number of high school graduates over age 25) were significantly associated with laboratory-confirmed S-OIV in the Lower Rio Grande Valley. Together, these findings underscore the need for community-based surveillance, improve our understanding of the distribution of the burden of influenza within the community, and have implications for vaccination and community outreach initiatives.
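The community-level Poisson model can be reproduced in outline with a GLM in which block-group population enters as an exposure, so that coefficients act on case rates. The column names and numbers below are illustrative assumptions, not the study's variables:

import pandas as pd
import statsmodels.api as sm

# Hypothetical block-group table; column names are illustrative only.
df = pd.DataFrame({
    "cases":        [3, 0, 7, 2, 5, 1],
    "pct_under15":  [28.0, 18.5, 33.1, 22.4, 30.2, 19.9],
    "hh_size":      [3.8, 2.6, 4.1, 3.0, 3.9, 2.7],
    "hs_grads_25p": [410, 690, 350, 580, 390, 640],
    "population":   [1500, 1200, 1800, 1400, 1700, 1100],
})

X = sm.add_constant(df[["pct_under15", "hh_size", "hs_grads_25p"]])
# Population enters as an exposure (log offset), so the fitted
# coefficients describe effects on the per-capita case rate.
model = sm.GLM(df["cases"], X, family=sm.families.Poisson(),
               exposure=df["population"])
print(model.fit().summary())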

Relevance:

10.00%

Publisher:

Abstract:

Inefficiencies in the management of healthcare waste can give rise to undesirable health effects, such as transmission of infections and environmental pollution, within and beyond the health facilities generating these wastes. Factors such as the prevalence of disease, conflict, and the outflow of intellectual capacity make low-income countries more susceptible to these adverse effects. The purpose of this systematic review was to describe the effectiveness of interventions geared toward better managing the generation, collection, transport, treatment, and disposal of medical waste, as they have been applied in low- and middle-income countries.

Using a systematic search strategy and an evaluation of study quality, this review examined the literature for published studies on healthcare waste management interventions carried out in developing countries, specifically low and lower-middle income countries, from the year 2000 onward. From an initially identified set of 829 studies, only three ultimately met all inclusion, exclusion, and quality criteria. A multi-component intervention in the Syrian Arab Republic, conducted in 2007, aimed to improve waste segregation practice in a hospital setting; it increased the use of segregation boxes and reduced rates of sharps injury among staff. A second study, conducted in 2008, trained medical students as monitors of waste segregation practice in an Indian teaching hospital; practice improved in wards and laboratories but not in the intensive care units. The third study, performed in 2008 in China, modified the components of a medical waste incinerator to improve efficiency and reduce stack emissions; emitted gaseous pollutants, except polychlorinated dibenzofurans (PCDF), were below US EPA permissible exposure limits, while heavy metal residues in the fly ash remained unchanged.

Owing to the paucity of well-designed studies, there is insufficient evidence in the literature to draw conclusions about the effectiveness of such interventions in low-income settings. There is suggestive but insufficient evidence that multi-component interventions aimed at improving waste segregation through behavior modification, provision of segregation tools, and training of monitors are effective in low-income settings.

Relevance:

10.00%

Publisher:

Abstract:

Radiomics is the high-throughput extraction and analysis of quantitative image features. For non-small cell lung cancer (NSCLC) patients, radiomics can be applied to standard-of-care computed tomography (CT) images to improve tumor diagnosis, staging, and response assessment. The first objective of this work was to show that CT image features extracted from pre-treatment NSCLC tumors could be used to predict tumor shrinkage in response to therapy. This is important because tumor shrinkage is a cancer treatment endpoint correlated with probability of disease progression and overall survival, and accurate prediction of shrinkage could lead to individually customized treatment plans. To accomplish this objective, 64 NSCLC patients with similar treatments were all imaged using the same CT scanner and protocol. Quantitative image features were extracted, and principal component regression with simulated annealing subset selection was used to predict shrinkage. Cross-validation and permutation tests were used to validate the results, and the optimal model gave a strong correlation between the observed and predicted shrinkages. The second objective of this work was to identify sets of NSCLC CT image features that are reproducible, non-redundant, and informative across multiple machines; feature sets with these qualities are needed for NSCLC radiomics models to be robust to machine variation and spurious correlation. To accomplish this objective, test-retest CT image pairs were obtained from 56 NSCLC patients imaged on three CT machines from two institutions. For each machine, quantitative image features with concordance correlation coefficient values greater than 0.90 were considered reproducible. Multi-machine reproducible feature sets were created by taking the intersection of the individual machine reproducible feature sets, and redundant features were removed through hierarchical clustering. The findings showed that image feature reproducibility and redundancy depended on both the CT machine and the CT image type (average cine 4D-CT imaging vs. end-exhale cine 4D-CT imaging vs. helical inspiratory breath-hold 3D-CT). For each image type, a set of cross-machine reproducible, non-redundant, and informative image features was identified. Compared to end-exhale 4D-CT and breath-hold 3D-CT, average 4D-CT derived image features showed superior multi-machine reproducibility and are the best candidates for clinical correlation.
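The test-retest screening step rests on Lin's concordance correlation coefficient. A small sketch of how a reproducible feature set might be computed, under an assumed (n_patients, n_features) array layout that is not taken from the thesis:

import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for test-retest values.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def reproducible_features(test, retest, threshold=0.90):
    """Indices of features whose test-retest CCC exceeds the threshold.

    test, retest: arrays of shape (n_patients, n_features) for one machine.
    Per-machine sets can then be intersected across machines.
    """
    return {j for j in range(test.shape[1])
            if lins_ccc(test[:, j], retest[:, j]) > threshold}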

Relevance:

10.00%

Publisher:

Abstract:

The dataset provides detailed information on a study conducted in Lahore's 7 major towns. Samples were taken from 472 tubewells and analyzed for major cations and anions using APHA 2012 techniques, as explained herein. E. coli determination was also performed to check for microbial contamination. The data include results from PHREEQC modeling of As(III)/As(V) species and saturation indices, as well as hydrochemical water facies computed with Aquachem. The WHO (2011) and EPA standards included in Aquachem identified the parameters that were in violation. Bicarbonate water types dominated the groundwater, with 50.21% of the samples exceeding the EPA maximum permissible limit of 250 mg/L in drinking water. Similarly, 30.51% of the samples had TDS values greater than 500 mg/L, while 85.38% of the samples exceeded the 10 µg/L threshold value for arsenic. Instances of high magnesium hazard (MH) values were also observed; MH values above 50% are detrimental to crops and may reduce expected yields, so constant assessment is required where the groundwater is used for irrigation. The membrane filtration technique using m-Endo agar indicated that 3.59% of the samples had TNC (too numerous to count) values for E. coli, while 5.06% exceeded the acceptable value of 0 cfu/100 mL for drinking water. Any trace of E. coli in a groundwater sample indicates recent fecal contamination and signifies the presence of enteric pathogens; if the groundwater is not properly dosed with disinfectants, it may harm human health. It is concluded that more studies are needed and that proper groundwater management should be implemented to safeguard the lives of communities that depend solely on groundwater in the city.
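The magnesium hazard figures cited above follow the standard ratio of milliequivalent concentrations (MH = Mg / (Ca + Mg) × 100). A short sketch, with an illustrative sample that is not from the dataset:

def magnesium_hazard(ca_mg_l, mg_mg_l):
    """Magnesium hazard: MH = Mg / (Ca + Mg) * 100, computed in meq/L.

    Inputs are concentrations in mg/L; equivalent weights are
    Ca 20.04 and Mg 12.15 mg/meq.
    """
    ca_meq = ca_mg_l / 20.04
    mg_meq = mg_mg_l / 12.15
    return 100.0 * mg_meq / (ca_meq + mg_meq)

# Hypothetical sample: MH above 50% is considered unsuitable for irrigation.
print(f"MH = {magnesium_hazard(60.0, 45.0):.1f}%")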

Relevance:

10.00%

Publisher:

Abstract:

The objective of this project is to show that the permissible explosive 20 SR can pull coal satisfactorily under normal blasting conditions, and to establish the equivalence between 20 SR and a gelatin dynamite (Goma 2 ECO). To achieve this goal, a series of blasts was carried out, varying the blasting conditions and the powder factor for the 20 SR. Fragmentation was analyzed from images of the blasted rock using commercial image-analysis software, and the results were compared with those of the Kuz-Ram theoretical fragmentation model. The tests showed that the 20 SR explosive can pull coal for different coal rock compositions. We conclude that 20 SR appears able to pull coal under normal blasting conditions, using a powder factor scaled by the ratio of the ballistic-mortar strengths of the two explosives.
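For context, the Kuz-Ram comparison can be sketched with Cunningham's mean-size formula and the Rosin-Rammler size distribution. The rock factor, powder factor, and weight strengths below are illustrative values for the sketch, not the project's measurements:

import math

def kuz_ram_x50(A, K, Q, E):
    """Kuz-Ram mean fragment size (cm), Cunningham's form.

    A: rock factor, K: powder factor (kg/m^3),
    Q: charge per hole (kg), E: relative weight strength (ANFO = 100).
    """
    return A * K**-0.8 * Q**(1 / 6) * (115.0 / E)**(19 / 30)

def rosin_rammler_passing(x, x50, n):
    """Fraction of fragments finer than size x (Rosin-Rammler)."""
    return 1.0 - math.exp(-0.693 * (x / x50)**n)

# Hypothetical comparison: same charge, two explosives whose relative
# weight strengths come from ballistic-mortar tests (values illustrative).
for name, E in [("Goma 2 ECO", 135.0), ("20 SR", 100.0)]:
    x50 = kuz_ram_x50(A=8.0, K=0.45, Q=2.0, E=E)
    print(f"{name}: X50 = {x50:.1f} cm, "
          f"passing 30 cm = {rosin_rammler_passing(30, x50, n=1.3):.0%}")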

Relevance:

10.00%

Publisher:

Abstract:

Service compositions put together loosely-coupled component services to perform more complex, higher-level, or cross-organizational tasks in a platform-independent manner. Quality-of-Service (QoS) properties, such as execution time, availability, or cost, are critical for their usability, and permissible boundaries for their values are defined in Service Level Agreements (SLAs). We propose a method whereby constraints that model SLA conformance and violation are derived at any given point of the execution of a service composition. These constraints are generated using the structure of the composition and properties of the component services, which can be either known or empirically measured. Violation of these constraints means that the corresponding scenario is infeasible, while satisfaction gives values for the constrained variables (start/end times for activities, or number of loop iterations) that make the scenario possible. These results can be used to perform optimized service matching or to trigger preventive adaptation or healing.
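As a toy illustration of such constraints, consider a purely sequential composition with interval-valued activity durations: checking the derived bounds against the SLA deadline at any point of the execution tells us whether violation is already unavoidable, conformance is guaranteed, or monitoring must continue. The numbers and names below are assumptions for the sketch, not the paper's model:

def remaining_time_bounds(durations, completed_upto):
    """Bounds on total remaining time given per-activity duration intervals."""
    lo = sum(d[0] for d in durations[completed_upto:])
    hi = sum(d[1] for d in durations[completed_upto:])
    return lo, hi

# Measured or declared duration intervals (seconds) for three services.
durations = [(0.2, 0.5), (1.0, 2.5), (0.3, 0.8)]
sla_deadline = 3.0
elapsed = 0.9  # time already spent when we check, after the first service
lo, hi = remaining_time_bounds(durations, completed_upto=1)

if elapsed + lo > sla_deadline:
    print("SLA violation is already unavoidable -> trigger adaptation")
elif elapsed + hi <= sla_deadline:
    print("SLA conformance is guaranteed for any remaining execution")
else:
    print("Outcome depends on runtime behavior; keep monitoring")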

Relevance:

10.00%

Publisher:

Abstract:

This project aims to show that the permissible (safety) explosive 20 SR can pull coal satisfactorily under the usual blasting conditions, and to establish the practical equivalence between this explosive and a gelatin dynamite (Goma 2 ECO). To this end, a series of blasts was carried out, varying the firing conditions and the powder factor of the safety explosive. Fragmentation of the muck pile was studied with photographic-analysis software, and the results were compared with those of the Kuz-Ram theoretical fragmentation model. The results demonstrated the coal-pulling capability of the safety explosive for different coal compositions. The study suggests that 20 SR can pull coal under the usual blasting conditions, using a powder factor scaled by the ratio of the ballistic-mortar strengths of the two explosives.

Relevance:

10.00%

Publisher:

Abstract:

Finding the degree-constrained minimum spanning tree (DCMST) of a graph is a widely studied NP-hard problem, with network design among its most important applications. Here we deal with a new variant of the DCMST problem, which consists of finding not only the degree-constrained but also the role-constrained minimum spanning tree (DRCMST), i.e., we add constraints restricting the role of each node in the tree to root, intermediate, or leaf node. Furthermore, we do not limit the number of root nodes to one, thereby generally building a forest of DRCMSTs. The modeling of network design problems can benefit from the possibility of generating more than one tree and determining the role of the nodes in the network. We propose a novel permutation-based representation to encode these forests, in which one permutation simultaneously encodes all the trees to be built. We simulate a wide variety of DRCMST problems, which we optimize using eight different evolutionary computation algorithms whose individuals are encoded with the proposed representation: estimation of distribution algorithm, generational genetic algorithm, steady-state genetic algorithm, covariance matrix adaptation evolution strategy, differential evolution, elitist evolution strategy, non-elitist evolution strategy, and particle swarm optimization. The best results are obtained by the estimation of distribution algorithm and both types of genetic algorithms, although the genetic algorithms are significantly faster.
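To make the idea of one permutation encoding a whole forest concrete, here is a deliberately simplified decoder: designated roots start trees, and each later node in the permutation attaches to the earliest already-placed node with spare degree. This is one plausible scheme for illustration only, not the representation proposed in the paper:

def decode_forest(perm, roots, max_degree):
    """Decode one permutation of all nodes into a degree-bounded forest."""
    placed, edges, degree = [], [], {v: 0 for v in perm}
    for v in perm:
        if v in roots:
            placed.append(v)          # start a new tree at a root
            continue
        for u in placed:              # earliest placed node with capacity
            if degree[u] < max_degree:
                edges.append((u, v))
                degree[u] += 1
                degree[v] += 1        # v uses one of its own degree slots
                placed.append(v)
                break
    return edges

# Two roots (3 and 0) yield a two-tree forest over six nodes.
print(decode_forest(perm=[3, 0, 5, 1, 4, 2], roots={3, 0}, max_degree=2))

Node roles fall out of the decoding: roots are the tree starters, nodes that acquire children become intermediate nodes, and degree-one non-roots are leaves.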

Relevance:

10.00%

Publisher:

Abstract:

Finding the degree-constrained minimum spanning tree (DCMST) of a graph is a widely studied NP-hard problem. One of its most important applications is network design. Here we deal with a new variant of the DCMST problem, which consists of finding the minimum spanning tree subject not only to degree constraints but also to role constraints (DRCMST), i.e., we add constraints to restrict the role that nodes play in the tree: root node, intermediate node, or leaf node. Furthermore, we do not limit the number of root nodes to one, so in general we build forests of DRCMSTs. The modeling of network design problems can benefit from the possibility of generating more than one tree and determining the role of the nodes in the network. We propose a new permutation-based representation to encode the forests of DRCMSTs, in which one permutation simultaneously encodes all the trees to be built. We simulate a wide variety of DRCMST problems, which we optimize using eight different evolutionary computation algorithms that encode the individuals of the population with the proposed representation: estimation of distribution algorithm (EDA), generational genetic algorithm (gGA), steady-state genetic algorithm (ssGA), covariance matrix adaptation evolution strategy (CMAES), differential evolution (DE), elitist evolution strategy (ElitistES), non-elitist evolution strategy (NonElitistES), and particle swarm optimization (PSO). The best results were obtained by the estimation of distribution algorithm and both types of genetic algorithms, although the genetic algorithms were significantly faster.

Relevance:

10.00%

Publisher:

Abstract:

Overtopping is defined as the transport of a significant volume of water over the crest of a structure into the sheltered area. It is therefore the phenomenon that, in general, determines a breakwater's crest level, depending on the volume of overtopping deemed acceptable in view of the structure's functional and structural constraints. In general, the amount of overtopping a breakwater can tolerate from the standpoint of its structural integrity is far greater than the amount permissible from the standpoint of its functionality; on the other hand, designing for a negligible or zero overtopping probability leads to designs incompatible with other considerations, such as aesthetics or cost. Methods for assessing wave overtopping of breakwater crown walls range from the most traditional, namely empirical or semi-empirical formulations and reduced-scale physical model tests, to less common approaches such as prototype instrumentation, artificial neural networks, and numerical models. Physical model tests are the most accurate and reliable tool for the specific study of each case, given the complexity of the overtopping process and the multitude of physical phenomena and parameters involved: they reveal the hydraulic and structural behavior of the breakwater, allow design flaws to be identified and alternatives evaluated before construction, and thus save construction costs by improving the initial design, although their results carry margins of error associated with scale and model effects. Empirical or semi-empirical formulations have the drawback that their use is limited by the applicability of the formulas, which are valid only for the range of environmental conditions and structural typologies reproduced in the underlying tests.

The objective of this Doctoral Thesis is to compare the overtopping formulations developed by different authors for the main breakwater typologies. First, the existing equations for estimating overtopping rates on sloping and vertical breakwaters were compiled and analyzed. These equations were then checked against the results of a series of tests performed at the Centre for Port and Coastal Studies of CEDEX. The neural network tool NN-OVERTOPPING2, developed in the European CLASH project ("Crest Level Assessment of Coastal Structures by Full Scale Monitoring, Neural Network Prediction and Hazard Analysis on Permissible Wave Overtopping"), was also applied to the selected sloping-breakwater tests, contrasting the measured overtopping rates with this neural-network-based method. Finally, the influence of wind on overtopping was analyzed through reduced-scale physical model tests, generating waves with and without wind, on the vertical section of the Levante Breakwater in Málaga. A critical analysis of each formulation applied to the selected tests leads to the conclusions of this Doctoral Thesis.
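For orientation, vertical-wall overtopping formulas of the kind contrasted in the thesis typically take a dimensionless exponential form, q / sqrt(g·Hs^3) = a·exp(-b·Rc/Hs). The sketch below uses the coefficients commonly attributed to Franco et al. for plain vertical walls; treat them, and the input values, as illustrative assumptions:

import math

def mean_overtopping_discharge(Hs, Rc, g=9.81, a=0.2, b=4.3):
    """Mean overtopping discharge q (m^3/s per m run) for a vertical wall.

    Hs: significant wave height (m); Rc: crest freeboard (m).
    a, b: empirical coefficients, valid only within the range of the
    tests they were fitted to (here, values reported for plain walls).
    """
    return a * math.exp(-b * Rc / Hs) * math.sqrt(g * Hs**3)

# Illustrative case: 4 m significant wave height, 6 m freeboard.
print(f"q = {mean_overtopping_discharge(Hs=4.0, Rc=6.0) * 1000:.1f} l/s per m")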

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, any telecommunications company that owns its own network must face the problem of maintaining it. Offering a minimum quality of service to its customers must be one of its main objectives, and this quality must be maintained even when incidents occur in the network. This work addresses the problem of prioritizing the order in which cables, paths, and circuits damaged by an incident are restored within the backbone transport network of a telecommunications operator. After a detailed statement of the problem and of all the factors that influence the decision, a solution based on discrete multicriteria decision methods is developed first, specifically using ELECTRE I and AHP. Next, a solution based on neural networks is proposed, with two different approaches to the problem. Finally, a method based on Particle Swarm Optimization (PSO) is applied, adapted to an integer-permutation (ordering) problem and with a particular way of evaluating the swarm's global best position. As supporting material, the work also describes what a telecommunications operator is: its departments and internal processes, the services it offers to clients, the networks on which those services run, and the key points to consider when implementing its integrated management information systems.
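One common way to adapt PSO to an ordering problem like this one (though not necessarily the thesis's exact variant) is the random-keys trick: particles move in a continuous space and each position is decoded into a permutation by sorting its components. A self-contained sketch with a hypothetical restoration-priority objective:

import numpy as np

rng = np.random.default_rng(0)

def decode(position):
    return np.argsort(position)          # continuous keys -> restoration order

def cost(order, penalties):
    # Hypothetical objective: circuits with high penalties should be
    # restored early (low rank), so we minimize rank-weighted penalty.
    return sum(rank * penalties[c] for rank, c in enumerate(order))

n, swarm = 8, 20
penalties = rng.uniform(1, 10, size=n)   # impact of each damaged circuit
x = rng.normal(size=(swarm, n))
v = np.zeros_like(x)
pbest = x.copy()
pcost = np.array([cost(decode(p), penalties) for p in x])
g = pbest[pcost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x += v
    c = np.array([cost(decode(p), penalties) for p in x])
    improved = c < pcost
    pbest[improved], pcost[improved] = x[improved], c[improved]
    g = pbest[pcost.argmin()].copy()

print("best restoration order:", decode(g))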

Relevance:

10.00%

Publisher:

Abstract:

We study the Morton-Franks-Williams inequality for closures of simple braids (also known as positive permutation braids). This allows us to prove, in a simple way, that the set of simple braids is an orthonormal basis for the inner product on the Hecke algebra of the braid group defined by Kálmán, who first obtained this result using an interesting connection with contact topology. We also introduce a new technique for studying the HOMFLYPT polynomial of closures of positive braids, namely resolution trees whose leaves are simple braids. In terms of these simple resolution trees, we characterize closed positive braids for which the Morton-Franks-Williams inequality is strict. In particular, we determine explicitly the positive braid words on three strands whose closures have braid index three.
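For reference, one common normalization of the Morton-Franks-Williams inequality (conventions for the variables vary across the literature): for a braid on n strands with writhe w and closure L,

\[
  w - n + 1 \;\le\; \min\deg_v P_L(v,z)
  \;\le\; \max\deg_v P_L(v,z) \;\le\; w + n - 1,
\]

so the braid index b(L) is bounded below by

\[
  b(L) \;\ge\; \tfrac{1}{2}\,\mathrm{breadth}_v P_L(v,z) + 1 .
\]

The characterization in the abstract concerns the braids for which the outer bounds are not attained, i.e., for which the inequality is strict.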