14 results for New Marginal Literature
at Universidad Politécnica de Madrid
Abstract:
Access to medical literature collections such as PubMed, MedScape or Cochrane has increased notably in recent years thanks to web-based tools that provide instant access to the information. However, more sophisticated methodologies are needed to exploit all that information efficiently. The lack of advanced search methods in the clinical domain means that, even when using well-defined questions for a particular disease, clinicians receive too many results. Since no information analysis is applied afterwards, relevant results that do not appear at the top of the retrieved collection may be overlooked by the expert, causing an important loss of information. In this work we present a new method to improve scientific article search using patient information for query generation. Using a federated search strategy, it is able to search several resources simultaneously and present a single relevant literature collection. By applying NLP techniques, it groups semantically similar publications together, facilitating the identification of relevant information by clinicians. This method aims to be the foundation of a collaborative environment for sharing clinical knowledge related to patients and scientific publications.
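As a rough illustration of the two ideas mentioned above (federated search over several sources and grouping of semantically similar results), the following Python sketch merges result lists and clusters them by TF-IDF cosine distance. The source callables and record fields are placeholders, not the authors' actual implementation.

    # Hypothetical sketch, not the authors' system: merge results from several literature
    # back-ends and group semantically similar records with TF-IDF cosine clustering.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import AgglomerativeClustering

    def federated_search(query, sources):
        """sources: callables mapping a query string to lists of {'title', 'abstract', 'doi'} dicts."""
        merged, seen = [], set()
        for search in sources:
            for record in search(query):
                key = record.get("doi") or record["title"].lower()
                if key not in seen:          # simple de-duplication across back-ends
                    seen.add(key)
                    merged.append(record)
        return merged

    def group_similar(records, distance_threshold=0.8):
        """Cluster records whose title + abstract are close in TF-IDF cosine distance."""
        texts = [r["title"] + " " + r.get("abstract", "") for r in records]
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts).toarray()
        labels = AgglomerativeClustering(n_clusters=None, distance_threshold=distance_threshold,
                                         metric="cosine", linkage="average").fit_predict(tfidf)
        groups = {}
        for record, label in zip(records, labels):
            groups.setdefault(label, []).append(record)
        return list(groups.values())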
Abstract:
Intercultural Approaches to Cities and Spaces in Literature, Film, and New Media: a review of new work by Manzanas and Benito and Lopez-Varela and Net
Abstract:
Specialized search engines such as PubMed, MedScape or Cochrane have dramatically increased the visibility of biomedical scientific results. These web-based tools allow physicians to access scientific papers instantly. However, this decisive improvement has not had a proportional impact on clinical practice owing to the lack of advanced search methods. Even queries highly specific to a concrete pathology frequently retrieve too much information, with publications related to the patients treated by the physician lying beyond the scope of the results actually examined. In this work we present a new method to improve scientific article search using patient information. Two pathologies have been used within the project to retrieve literature relevant to patient data and to integrate it with other sources. Promising results suggest the suitability of the approach, highlighting publications that deal with patient features and facilitating literature search for physicians.
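A minimal way to express the "highlighting publications dealing with patient features" step, assuming citations and patient features are already available as plain strings (the field names are hypothetical), is to rank the retrieved records by how many patient features they mention:

    # Illustrative only: score each citation by the number of patient features it mentions.
    def rank_by_patient_features(citations, patient_features):
        """citations: dicts with 'title' and 'abstract'; patient_features: list of strings."""
        features = [f.lower() for f in patient_features]
        scored = []
        for citation in citations:
            text = (citation.get("title", "") + " " + citation.get("abstract", "")).lower()
            score = sum(1 for f in features if f in text)
            scored.append((score, citation))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [citation for score, citation in scored if score > 0]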
Abstract:
Among the classical operators of mathematical physics, the Laplacian plays an important role owing to the number of different situations that can be modelled by it. Because of this, a great effort has been made by mathematicians as well as by engineers to master its properties, to the point that nearly everything has been said about them from a qualitative viewpoint. Quantitative results have also been obtained through the use of the new numerical techniques supported by the computer. Finite element methods and boundary techniques have been successfully applied to engineering problems, as can be seen in the technical literature (for instance [1], [2], [3]). Boundary techniques are especially advantageous in those cases in which the main interest is concentrated on what is happening at the boundary. This situation is very common in potential problems due to the properties of harmonic functions. In this paper we intend to show how a boundary condition different from the classical ones, but physically sound, can be introduced quite naturally into the discretization framework of the Boundary Integral Equation Method. The idea is developed in the context of heat conduction in axisymmetric problems, but it is hoped that its extension to other situations is straightforward. After the presentation of the method, several examples show its capabilities for modelling a physical problem.
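For reference, the starting point of such boundary formulations for the Laplacian is the standard direct boundary integral identity for a harmonic field u, written here in its generic three-dimensional form rather than the specific axisymmetric kernel used in the paper:

    c(\xi)\,u(\xi) + \int_{\Gamma} u(x)\,\frac{\partial G(\xi,x)}{\partial n_x}\,\mathrm{d}\Gamma_x
        = \int_{\Gamma} G(\xi,x)\,\frac{\partial u(x)}{\partial n_x}\,\mathrm{d}\Gamma_x,
    \qquad G(\xi,x) = \frac{1}{4\pi\,\lvert \xi - x \rvert},

with c(\xi) = 1/2 at a smooth boundary point; the non-classical boundary condition then enters through the prescribed relation between u and its normal derivative on the boundary.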
Abstract:
Over the last few decades, the ever-increasing output of scientific publications has led to new challenges in keeping up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. Results: We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. Conclusions: CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by the query "breast neoplasm", fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open-source tool that can be freely used for non-profit purposes and integrated with other existing systems.
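Step (iii) above can be sketched with NCBI's public E-utilities interface. The MeSH headings below are placeholders standing in for terms that a tool such as CDAPubMed would extract from the HL7-CDA document; this is not the extension's own code.

    # Build a focused PubMed query from MeSH headings and submit it via NCBI E-utilities.
    import urllib.parse
    import urllib.request

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_query_from_mesh(mesh_terms):
        """AND together MeSH headings, e.g. '"Breast Neoplasms"[MeSH Terms] AND ...'."""
        return " AND ".join('"{}"[MeSH Terms]'.format(term) for term in mesh_terms)

    def esearch(query, retmax=20):
        params = urllib.parse.urlencode({"db": "pubmed", "term": query, "retmax": retmax})
        with urllib.request.urlopen(ESEARCH + "?" + params) as response:
            return response.read().decode("utf-8")      # XML listing matching PubMed IDs

    # Hypothetical patient features narrowing a broad disease query, as in the example above.
    print(esearch(pubmed_query_from_mesh(["Breast Neoplasms", "Tamoxifen", "Postmenopause"])))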
Abstract:
After more than 40 years of life, software evolution should be considered a mature field. However, despite such a long history, many research questions still remain open, and controversial studies about the validity of the laws of software evolution are common. During the first part of these 40 years the laws themselves evolved to adapt to changes in both the research and the software industry environments. This process of adaptation to new paradigms, standards, and practices stopped about 15 years ago, when the laws were revised for the last time. However, most controversial studies have been raised during this latter period. Based on a systematic and comprehensive literature review, in this paper we describe how and when the laws, and the software evolution field, evolved. We also address the current state of affairs regarding the validity of the laws, how they are perceived by the research community, and the developments and challenges that are likely to occur in the coming years.
Abstract:
Experimental research on imposed deformation is generally conducted through small-scale laboratory experiments. The attractiveness of field research lies in the possibility of comparing results obtained from full-scale structures with theoretical predictions. Unfortunately, measurements obtained from real structures are rarely described in the literature. The structural response of integral structures depends significantly on stiffness changes and constraints. The new Barajas Airport Terminal in Madrid, Spain, provides large integral modules: partially post-tensioned concrete frames cast monolithically over three floor levels with an overall length of approximately 80 m. The field campaign described in this article covers the instrumentation of one of these frames, focusing on the influence of imposed deformations such as creep, shrinkage and temperature. The monitoring equipment included embedded strain gages, thermocouples, DEMEC measurements and simple displacement measurements. Data were collected throughout construction and during two years of service. A complete data range of five years is presented and analysed. The results are compared with a simple approach to predict the long-term shortening of this concrete structure. Both analytical and experimental results are discussed.
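A simple prediction of the kind referred to at the end of the abstract typically superimposes the main imposed-deformation components; the decomposition below is the generic textbook form, not necessarily the specific approach used in the article:

    \varepsilon(t) \;=\; \frac{\sigma(t_0)}{E_c}\,\bigl[\,1 + \varphi(t, t_0)\,\bigr]
        \;+\; \varepsilon_{cs}(t) \;+\; \alpha_T\,\Delta T(t),

where \varphi(t, t_0) is the creep coefficient, \varepsilon_{cs}(t) the shrinkage strain, and \alpha_T\,\Delta T(t) the thermal strain.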
Abstract:
Along the Apulian Adriatic coast, in a cliff south of Trani, a succession of three units (superimposed on one another) of marine and/or paralic environments has been recognised. The lowest unit, unit I, is characterised by calcareous/siliciclastic sands (css), micritic limestones (ml), stromatolitic and characean boundstones (scb), and characean calcarenites (cc). The sedimentary environment passes from shallow marine, with low energy and temporary episodes of subaerial exposure, to lagoonal with limited exchange with the sea. The lagoonal stromatolites (scb subunit) grew during a long period of relative stability of a high sea level in a tropical climate. Unit I is truncated at the top by an erosion surface overlain by unit II, which consists of a basal pebble lag (bpl), siliciclastic sands (ss), calcareous sands (cs), characean boundstones (cb), and a brown paleosol (bp). The sedimentary environment varies from beach to lagoon with salinity variations. Although there are indications of seismic events within the cs subunit, unit II deposition took place in a context of relative stability. Unit II is referable to a sea-level highstand. Unit III, transgressive over the preceding unit, consists of white calcareous sands (wcs), calcareous sands and calcarenites (csc), phytoclastic calcirudite and phytohermal travertine (pcpt), mixed deposits (csl, m, k, c), sands (s) and red/brown paleosols (rbp). The sedimentation of this unit was affected by synsedimentary tectonics, attested by seismites found at several heights. Unit III is also referable to a sea-level highstand. The scientific literature has so far generally attributed the deposits of the Trani cliff to the Tyrrhenian (auct.). As part of this work, datings were performed on 10 samples using the amino acid racemization (AAR) method applied to ostracod carapaces. Four of these samples were rejected because they showed recent contamination in the laboratory. The numerical ages indicate that the deposits of the Trani cliff are older than MIS 5. The upper part of unit I has been dated to 355±85 ka BP, thus allowing the lowest stromatolitic subunit (scb) to be assigned to the MIS 11 peak and the top of unit I to the MIS 11-MIS 10 interval. The base of unit II has been dated to 333±118 ka BP, thus attributing the erosion surface that bounds units I and II to the MIS 10 lowstand and the lower part of unit II to MIS 9.3. The upper part of unit II has been dated to 234±35 ka BP, while three other numerical ages come from unit III: 303±35, 267±51 and 247±61 ka BP. At present, the numerical ages cannot distinguish the sedimentation ages of units II and III, which are both related to the MIS 9.3-MIS 7.1 time range. However, the position of the units, superimposed on one another, and their respective ages allow us to recognise a subsidence phase between MIS 11 and MIS 7, followed by an uplift phase between MIS 7 and the present day, which brought the deposits to their current position. This tectonic pattern is not in full agreement with what is described in the literature for the Apulian foreland.
Abstract:
Numerous authors have proposed functions to quantify the degree of similarity between two fuzzy numbers using various descriptive parameters, such as the geometric distance, the distance between the centers of gravity or the perimeter. However, these similarity functions have drawbacks in specific situations. We propose a new similarity measure for generalized trapezoidal fuzzy numbers aimed at overcoming such drawbacks. This new measure accounts for the distance between the centers of gravity and the geometric distance, but also incorporates a new term based on the area shared by the fuzzy numbers. The proposed measure is compared against other measures in the literature.
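The paper's actual formula is not reproduced here; the sketch below only illustrates, for generalized trapezoidal fuzzy numbers (a1, a2, a3, a4; w) on a [0, 1] universe, how a shared-area term and a geometric-distance term of the kind mentioned above can be computed and combined.

    # Illustrative similarity ingredients, not the measure proposed in the paper.
    import numpy as np

    def membership(x, a1, a2, a3, a4, w=1.0):
        """Generalized trapezoidal membership function with height w."""
        return np.where(x < a1, 0.0,
               np.where(x < a2, w * (x - a1) / max(a2 - a1, 1e-12),
               np.where(x <= a3, w,
               np.where(x <= a4, w * (a4 - x) / max(a4 - a3, 1e-12), 0.0))))

    def similarity(A, B, n=2001):
        """A, B: tuples (a1, a2, a3, a4, w) defined on the [0, 1] universe of discourse."""
        xs = np.linspace(0.0, 1.0, n)
        mu_a, mu_b = membership(xs, *A), membership(xs, *B)
        shared = np.trapz(np.minimum(mu_a, mu_b), xs)            # shared area
        union = np.trapz(np.maximum(mu_a, mu_b), xs)
        overlap = shared / union if union > 0 else 1.0
        geometric = 1.0 - sum(abs(p - q) for p, q in zip(A[:4], B[:4])) / 4.0
        return overlap * geometric                               # one possible combination

    print(similarity((0.1, 0.2, 0.3, 0.4, 1.0), (0.15, 0.25, 0.35, 0.45, 1.0)))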
Abstract:
This paper addresses the assessment of the economic impact of the construction of a new road on the regional distribution of jobs. The paper summarizes, through a literature review, different existing modelling approaches used to assess economic impacts. Afterwards, we present the development of a comprehensive approach for analyzing the interaction between new transport infrastructure and its economic impact by means of an integrated model. This model has been applied to the construction of motorway A-40 in Spain (497 km), which runs across three regions without passing through the city of Madrid. The new road may in turn lead to the relocation of labor and capital owing to the improved accessibility of markets or inputs. The results suggest the existence of direct and indirect effects in other regions derived from the improvement of the transportation infrastructure, and confirm the relevance of road freight transport in some regions. We found that the changes in regional employment are substantial in some regions (increasing or decreasing jobs), but at the same time negligible in others. As a result, the approach provides broad guidance to national governments and other transport-related parties about the impacts of this transport policy.
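For readers unfamiliar with the accessibility dimension invoked above, a standard potential accessibility indicator takes the form below; this is a generic illustration only, not the specification used in the paper's integrated model:

    A_i \;=\; \sum_{j} W_j \, e^{-\beta\, c_{ij}},

where W_j is the economic mass (e.g. employment or GDP) of region j, c_{ij} the generalized transport cost between regions i and j after the new infrastructure, and \beta a cost-decay parameter.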
Abstract:
A formulation of the perturbed two-body problem that relies on a new set of orbital elements is presented. The proposed method represents a generalization of the special perturbation method published by Peláez et al. (Celest Mech Dyn Astron 97(2):131-150, 2007) for the case of a perturbing force that is partially or totally derivable from a potential. We accomplish this result by employing a generalized Sundman time transformation in the framework of the projective decomposition, which is a known approach for transforming the two-body problem into a set of linear and regular differential equations of motion. Numerical tests, carried out with examples extensively used in the literature, show the remarkable improvement in the performance of the new method for different kinds of perturbations and eccentricities. In particular, one notable result is that the quadratic dependence of the position error on the time-like argument exhibited by Peláez's method for near-circular motion under the J2 perturbation is transformed into a linear one. Moreover, the method proves to be competitive with two very popular element methods derived from the Kustaanheimo-Stiefel and Sperling-Burdet regularizations.
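The generalized Sundman transformation mentioned above is, in its common generic form (the specific exponent and scaling adopted in the paper are not stated here),

    \frac{\mathrm{d}t}{\mathrm{d}s} \;=\; c\, r^{\alpha},

where t is physical time, s the new time-like independent variable, r the orbital radius, and c, \alpha constants; \alpha = 1 recovers Sundman's classical transformation, for which s behaves like an eccentric-anomaly-type variable.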
Abstract:
This paper groups recent supply chain management research focused on organizational design and its software support. The classification encompasses criteria related to research methodology and content. Empirical studies from management science focus on network types and organizational fit. Novel planning algorithms and innovative coordination schemes are developed mostly in the field of operations research in order to propose new software features. Operations and production management research carries out cost-benefit analyses of IT software implementations. The success of software solutions for network coordination depends strongly on the fit between three dimensions: network configuration, coordination scheme and software functionality. This paper concludes with proposals for future research on unaddressed issues within and among the identified research streams.
Abstract:
Customer Satisfaction Surveys (CSS) have become an important tool for public transport planners, as improvements in the perceived quality of service lead to greater use of public transport and lower traffic pollution. Intelligent Transportation System (ITS) enhancements in public transport have traditionally included fleet management systems based on Automatic Vehicle Location (AVL) technologies, which can be used to optimize routing and scheduling and to feed real-time information into passenger information channels. However, surveys of public transport users could also benefit from new information technologies. As most customers carry their smartphones when traveling, Quick Response (QR) codes open up the possibility of conducting these surveys at a lower cost. This paper contributes to the limited existing literature by developing the analysis of QR codes applied to CSS in public transport and highlighting their importance in reducing the cost of data collection and processing. The added value of this research is that it provides the first assessment of a real case study in Madrid (Spain) using QR codes for this purpose. This pilot experience was part of a research project analyzing bus service quality in the same case study, so the QR code survey (155 valid questionnaires) was validated against a conventional face-to-face survey (520 valid questionnaires). The results show clearly that, after overcoming a few teething troubles, this QR code application will ultimately provide transport management with a useful tool to reduce survey costs.
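The data-collection side of such a survey reduces to printing a code that encodes the questionnaire URL; the snippet below uses the open-source Python qrcode package, with a purely hypothetical survey address.

    # Generate a printable QR code pointing to an on-line questionnaire.
    import qrcode

    survey_url = "https://example.org/bus-quality-survey?line=27&stop=1234"   # placeholder URL
    image = qrcode.make(survey_url)           # returns a PIL image object
    image.save("bus_survey_qr.png")           # ready to print on board or at bus stops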
Abstract:
El principal objetivo de esta tesis es el desarrollo de métodos de síntesis de diagramas de radiación de agrupaciones de antenas, en donde se realiza una caracterización electromagnética rigurosa de los elementos radiantes y de los acoplos mutuos existentes. Esta caracterización no se realiza habitualmente en la gran mayoría de métodos de síntesis encontrados en la literatura, debido fundamentalmente a dos razones. Por un lado, se considera que el diagrama de radiación de un array de antenas se puede aproximar con el factor de array que únicamente tiene en cuenta la posición de los elementos y las excitaciones aplicadas a los mismos. Sin embargo, como se mostrará en esta tesis, en múltiples ocasiones un riguroso análisis de los elementos radiantes y del acoplo mutuo entre ellos es importante ya que los resultados obtenidos pueden ser notablemente diferentes. Por otro lado, no es sencillo combinar un método de análisis electromagnético con un proceso de síntesis de diagramas de radiación. Los métodos de análisis de agrupaciones de antenas suelen ser costosos computacionalmente, ya que son estructuras grandes en términos de longitudes de onda. Generalmente, un diseño de un problema electromagnético suele comprender varios análisis de la estructura, dependiendo de las variaciones de las características, lo que hace este proceso muy costoso. Dos métodos se utilizan en esta tesis para el análisis de los arrays acoplados. Ambos están basados en el método de los elementos finitos, la descomposición de dominio y el análisis modal para analizar la estructura radiante y han sido desarrollados en el grupo de investigación donde se engloba esta tesis. El primero de ellos es una técnica de análisis de arrays finitos basado en la aproximación de array infinito. Su uso es indicado para arrays planos de grandes dimensiones con elementos equiespaciados. El segundo caracteriza el array y el acoplo mutuo entre elementos a partir de una expansión en modos esféricos del campo radiado por cada uno de los elementos. Este método calcula los acoplos entre los diferentes elementos del array usando las propiedades de traslación y rotación de los modos esféricos. Es capaz de analizar agrupaciones de elementos distribuidos de forma arbitraria. Ambas técnicas utilizan una formulación matricial que caracteriza de forma rigurosa el campo radiado por el array. Esto las hace muy apropiadas para su posterior uso en una herramienta de diseño, como los métodos de síntesis desarrollados en esta tesis. Los resultados obtenidos por estas técnicas de síntesis, que incluyen métodos rigurosos de análisis, son consecuentemente más precisos. La síntesis de arrays consiste en modificar uno o varios parámetros de las agrupaciones de antenas buscando unas determinadas especificaciones de las características de radiación. Los parámetros utilizados como variables de optimización pueden ser varios. Los más utilizados son las excitaciones aplicadas a los elementos, pero también es posible modificar otros parámetros de diseño como son las posiciones de los elementos o las rotaciones de estos. Los objetivos de las síntesis pueden ser dirigir el haz o haces en una determinada dirección o conformar el haz con formas arbitrarias. Además, es posible minimizar el nivel de los lóbulos secundarios o del rizado en las regiones deseadas, imponer nulos que evitan posibles interferencias o reducir el nivel de la componente contrapolar. 
El método para el análisis de arrays finitos basado en la aproximación de array infinito considera un array finito como un array infinito con un número finito de elementos excitados. Los elementos no excitados están físicamente presentes y pueden presentar tres terminaciones diferentes: cortocircuito, circuito abierto y carga adaptada. Cada una de estas terminaciones simulará mejor el entorno real en el que el array se encuentre. Este método de análisis se integra en la tesis con dos métodos diferentes de síntesis de diagramas de radiación. En el primero de ellos se presenta un método basado en programación lineal en donde es posible dirigir el haz o haces en la dirección deseada, además de ejercer un control sobre los lóbulos secundarios o imponer nulos. Este método es muy eficiente y obtiene soluciones óptimas. El mismo método de análisis es también aplicado a un método de conformación de haz, en donde un problema originalmente no convexo (y de difícil solución) es transformado en un problema convexo imponiendo restricciones de simetría, resolviendo de este modo eficientemente un problema complejo. Con este método es posible diseñar diagramas de radiación con haces de forma arbitraria, ejerciendo un control en el rizado del lóbulo principal, así como en el nivel de los lóbulos secundarios. El método de análisis de arrays basado en la expansión en modos esféricos se integra en la tesis con tres técnicas de síntesis de diagramas de radiación. Se propone inicialmente una síntesis de conformación del haz basada en el método de la recuperación de fase resuelta de forma iterativa mediante métodos convexos, en donde relajando las restricciones del problema original se consiguen unas soluciones cercanas a las óptimas de manera eficiente. Dos métodos de síntesis se han propuesto, donde las variables de optimización son las posiciones y las rotaciones de los elementos respectivamente. Se define una función de coste basada en la intensidad de radiación, la cual es minimizada de forma iterativa con el método del gradiente. Ambos métodos reducen el nivel de los lóbulos secundarios minimizando una función de coste. El gradiente de la función de coste es obtenido en términos de la variable de optimización en cada método. Esta función de coste está formada por la expresión rigurosa de la intensidad de radiación y por una función de peso definida por el usuario para imponer prioridades sobre las diferentes regiones de radiación, si así se desea. Por último, se presenta un método en el cual, mediante técnicas de programación entera, se buscan las fases discretas que generan un diagrama de radiación lo más cercano posible al deseado. Con este método se obtienen diseños que minimizan el coste de fabricación. En cada una de las diferentes técnicas propuestas en la tesis, se presentan resultados con elementos reales que muestran las capacidades y posibilidades que los métodos ofrecen. Se comparan los resultados con otros métodos disponibles en la literatura. Se muestra la importancia de tener en cuenta los diagramas de los elementos reales y los acoplos mutuos en el proceso de síntesis y se comparan los resultados obtenidos con herramientas de software comerciales.
ABSTRACT
The main objective of this thesis is the development of optimization methods for the radiation pattern synthesis of array antennas in which a rigorous electromagnetic characterization of the radiators and the mutual coupling between them is performed.
The electromagnetic characterization is usually overlooked in most of the synthesis methods available in the literature, mainly for two reasons. On the one hand, it is argued that the radiation pattern of an array is mainly determined by the array factor and that the mutual coupling plays a minor role. As shown in this thesis, the mutual coupling and the rigorous characterization of the array antenna significantly influence the array performance, and their computation leads to noticeable differences in the results obtained. On the other hand, it is difficult to introduce an analysis procedure into a synthesis technique. The analysis of array antennas is generally computationally expensive, as the structure to analyze is large in terms of wavelengths. A synthesis method requires carrying out a large number of analyses, which makes the synthesis problem very expensive computationally, or intractable in some cases. Two methods have been used in this thesis for the analysis of coupled antenna arrays; both of them have been developed in the research group in which this thesis has been carried out. They are based on the finite element method (FEM), domain decomposition and modal analysis. The first one obtains a finite array characterization from the results of the infinite array approach. It is especially suitable for the analysis of large planar arrays with equispaced elements. The second one characterizes the array elements and the mutual coupling between them with a spherical wave expansion of the field radiated by each element. The mutual coupling is computed using the translation and rotation properties of spherical waves. This method is able to analyze arrays with arbitrarily placed elements. Both techniques provide a matrix formulation that makes them very suitable for integration into synthesis techniques; the results obtained from such synthesis methods are consequently very accurate. Array synthesis consists of modifying one or several array parameters in search of some desired specifications of the radiation pattern. The array parameters used as optimization variables are usually the excitation weights applied to the array elements, but other array characteristics can be used as well, such as the element positions or rotations. The desired specifications may be to steer the beam towards a specific direction or to generate shaped beams with arbitrary geometry. Further characteristics can be handled as well, such as minimizing the side lobe level in other radiating regions, minimizing the ripple of the shaped beam, taking control over the cross-polar component, or imposing nulls on the radiation pattern to avoid possible interferences from specific directions. The analysis method based on the infinite array approach considers an infinite array with a finite number of excited elements. The infinite non-excited elements are physically present and may have three different terminations: short-circuit, open circuit and matched load. Each of these terminations better reproduces a particular real environment of the array. This method is used in this thesis for the development of two synthesis methods. In the first one, a multi-objective radiation pattern synthesis is presented, in which it is possible to steer the beam or beams in desired directions, minimizing the side lobe level and with the possibility of imposing nulls in the radiation pattern.
This method is very efficient and obtains optimal solutions, as it is based on convex programming. The same analysis method is used in a shaped beam technique in which an originally non-convex problem is transformed into a convex one by applying symmetry restrictions, thus solving a complex problem in an efficient way. This method allows the synthesis of shaped beam radiation patterns, controlling the ripple in the main lobe and the side lobe level. The analysis method based on the spherical wave expansion is applied to three synthesis techniques for the radiation pattern of coupled arrays. A shaped beam synthesis is presented, in which a convex formulation based on the phase retrieval method is proposed. In this technique, an originally non-convex problem is solved using a relaxation and solving convex problems iteratively. Two further methods are proposed based on the gradient method. A cost function is defined involving the radiation intensity of the coupled array and a weighting function that provides more degrees of freedom to the designer. The gradient of the cost function is computed with respect to the element positions in one of them and to the element rotations in the other. The elements are moved or rotated iteratively following the gradient. A highly non-convex problem is solved very efficiently, obtaining very good results that depend on the starting point. Finally, an optimization method is presented in which discrete digital phases are synthesized, providing a radiation pattern as close as possible to the desired one. The problem is solved using integer linear programming procedures, obtaining array designs that greatly reduce fabrication costs. Results with real elements are provided for every method, showing the capabilities that the above-mentioned methods offer. The results obtained are compared with available methods in the literature. The importance of introducing a rigorous analysis into the synthesis method is emphasized, and the results obtained are compared with commercial software, showing good agreement.
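As a toy illustration of the gradient-based position synthesis summarized above, the sketch below minimizes a weighted radiation-intensity cost for a small linear array. It assumes isotropic, uncoupled elements with uniform excitation (the thesis instead uses rigorous FEM/spherical-mode models of the coupled elements), and the step size and angular regions are arbitrary choices.

    # Toy gradient-descent optimization of element positions for side lobe reduction.
    import numpy as np

    k = 2 * np.pi                                    # wavenumber, wavelength = 1
    theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
    u = np.sin(theta)
    weight = (np.abs(theta) > np.deg2rad(10)).astype(float)   # penalize only the side-lobe region

    def cost_and_grad(x):
        """Weighted radiated intensity of a uniformly excited linear array with positions x."""
        phase = np.exp(1j * k * np.outer(u, x))      # shape: (angles, elements)
        af = phase.sum(axis=1)                       # array factor
        cost = np.sum(weight * np.abs(af) ** 2)
        # d|AF|^2 / dx_n = 2 Re{ conj(AF) * j k u * exp(j k u x_n) }
        grad = 2 * np.sum(weight[:, None] *
                          np.real(np.conj(af)[:, None] * 1j * k * u[:, None] * phase), axis=0)
        return cost, grad

    x = np.arange(8) * 0.5                           # start from a half-wavelength grid
    for _ in range(200):                             # plain gradient descent, fixed step
        cost, grad = cost_and_grad(x)
        x -= 1e-4 * grad
    print(np.round(x, 3), round(cost, 1))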