842 results for Mathematical operators
Abstract:
The questions studied in this thesis are centered around the moment operators of a quantum observable, the latter being represented by a normalized positive operator measure. The moment operators of an observable are physically relevant, in the sense that these operators give, as averages, the moments of the outcome statistics for the measurement of the observable. The main questions under consideration in this work arise from the fact that, unlike a projection valued observable of the von Neumann formulation, a general positive operator measure cannot be characterized by its first moment operator. The possibility of characterizing certain observables by also involving higher moment operators is investigated and utilized in three different cases: a characterization of projection valued measures among all the observables is given, a quantization scheme for unbounded classical variables using translation covariant phase space operator measures is presented, and, finally, a mathematically rigorous description is obtained for the measurements of rotated quadratures and phase space observables via the high amplitude limit in the balanced homodyne and eight-port homodyne detectors, respectively. In addition, the structure of the covariant phase space operator measures, which is essential for the above quantization, is analyzed in detail in the context of a (not necessarily unimodular) locally compact group as the phase space.
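For readers less familiar with the terminology, the moment operators referred to above admit a compact standard definition. The following fragment states it in generic notation (ours, not necessarily the thesis's) for an observable given as a normalized positive operator measure E on the real line and a state ρ:

```latex
% k-th moment operator of an observable E (a normalized positive operator
% measure on the Borel sets of the real line), and the k-th moment of the
% measurement outcome statistics in a state \rho:
E[k] = \int_{\mathbb{R}} x^{k}\, dE(x),
\qquad
\mu_{k}(\rho) = \operatorname{tr}\!\bigl[\rho\, E[k]\bigr]
              = \int_{\mathbb{R}} x^{k}\, p^{E}_{\rho}(dx),
\quad p^{E}_{\rho}(X) = \operatorname{tr}\bigl[\rho\, E(X)\bigr].
```

As the abstract notes, the first moment operator E[1] alone does not determine a general observable E, which is the starting point of the questions studied in the thesis.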
Abstract:
This Master's thesis was carried out in Lappeenranta in connection with the 5T project of the Telecom Business Research Center. The thesis examines business concepts for mobile value-added services from the operators' point of view. Value-added services broaden operators' service portfolios, and their share of the revenues of telecommunications companies, and of operators in particular, is predicted to grow considerably. The main objective of the thesis is to provide new perspectives on, and increase understanding of, the process of building a business concept for value-added services. This knowledge is used to support the business of the Content Gateway product studied in the empirical part of the thesis. By offering a fast connection and a billing channel between external service providers and the operator, this product enables the operator and the service providers to launch value-added service businesses. The value creation process of value-added services requires numerous cooperating parties whose collaboration is dynamic and whose communication is open, interactive and fast. Value creation also involves many converging trends. Traditional value chain thinking is insufficient for the new, networked business environment and has been replaced by the more modern value network model. A value network creates its competitive advantage over other networks by allocating resources and competencies optimally and by linking the cultures of strategic and operational management. This thesis compares the theoretical goals of a value network with two business concepts for value-added services. The first of these, the concept called i-mode, was chosen for the comparison because of its advanced nature and its characteristics that anticipate future development. The second example concept is built around the aforementioned Content Gateway product. The study includes, among other things, an analysis of partner acquisition, revenue logics and network management. As a result, the thesis provides guidelines on how an operator can build such a concept and which issues should be taken into account, especially in business related to messaging services.
Abstract:
In Finland, electricity distribution network companies operate in their network responsibility areas under exclusive rights, and the characteristics of these areas can differ considerably. The Energy Market Authority (Energiamarkkinavirasto) supervises compliance with electricity market legislation in distribution network operations. Through the Authority's regulatory model, distribution network operators are obliged to determine, within certain limits, the most suitable techno-economic service lives for their network components. These service lives affect in particular the network company's profit potential and customers' transmission prices. In addition, the quality of the distributed electricity, the reliability of the network, and the effects on the environment and safety must be taken into account. Mathematical modelling of service lives is often complicated, so the techno-economic service life is frequently chosen on the basis of experience and judgement. The most important boundary conditions for choosing the techno-economic service lives of distribution network components are the rates of growth of electricity consumption and of infrastructure change in the network responsibility area. In areas of slow change, the techno-economic service lives of network components approach the technical service lives, which are strongly affected by the geographical and climatic characteristics of the area. The network construction and maintenance methods, which vary from company to company, must also be taken into account. This Master's thesis focuses mainly on the techno-economic service life of electricity distribution network components through the characteristics of the network and its responsibility area. First, the service life of a distribution network is defined in several different ways, and the significance of service life in the present situation is examined. The first part of the thesis also presents the Energy Market Authority's model for supervising the reasonableness of distribution network pricing, introduced at the beginning of 2005, and reviews the role of the techno-economic service life in it. After that, the factors affecting the technical service life of distribution network components and their parts are examined. Particular attention is paid to wooden poles and related topical issues, because wooden poles often determine when an entire overhead line structure must be renewed. In addition, a general deterioration model is presented for salt-impregnated wooden poles, and the deterioration process of distribution transformers is studied. Finally, Graninge Kainuu Oy is examined as a distribution network operator, and technical and techno-economic service lives characteristic of the components in its network responsibility area are determined with the help of interviews, the most recent sources, research results, comparison and judgement.
Abstract:
In this paper we show how a nonlinear preprocessing of a highly noisy speech signal, based on morphological filters, improves the performance of a robust algorithm for pitch tracking (RAPT). This result is obtained with a very simple morphological filter; more sophisticated ones could improve the results further. Mathematical morphology is widely used in image processing and has a great number of applications. Almost all of its formulations, derived in the two-dimensional framework, are easily adapted to the one-dimensional context.
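As an illustration of what a very simple one-dimensional morphological filter can look like in practice, the sketch below applies a grey-scale opening followed by a closing to a noisy signal before pitch tracking; the window length and the opening-then-closing order are illustrative assumptions, not the filter used in the paper.

```python
# Minimal sketch (not the paper's exact filter): a 1-D morphological
# opening followed by a closing, used to smooth a noisy speech-like signal
# before running a pitch tracker. The window length is a free parameter.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_smooth(signal: np.ndarray, window: int = 9) -> np.ndarray:
    """Remove narrow positive spikes (opening) and narrow dips (closing)."""
    opened = grey_opening(signal, size=window)
    return grey_closing(opened, size=window)

if __name__ == "__main__":
    fs = 8000                                  # sampling rate (Hz), assumed
    t = np.arange(0, 0.05, 1 / fs)
    clean = np.sin(2 * np.pi * 150 * t)        # toy "voiced" segment at 150 Hz
    noisy = clean + 0.5 * np.random.randn(t.size)
    smoothed = morphological_smooth(noisy, window=9)
    print(smoothed[:10])
```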
Abstract:
Ordered weighted averaging (OWA) operators and their extensions are powerful tools used in numerous decision-making problems. This class of operators belongs to a more general family of aggregation operators, understood as discrete Choquet integrals. Aggregation operators are usually characterized by indicators. In this article, four indicators usually associated with the OWA operator are extended to discrete Choquet integrals: namely, the degree of balance, the divergence, the variance indicator and Rényi entropies. All of these indicators are considered from a local and a global perspective. The linearity of the indicators for linear combinations of capacities is investigated and, to illustrate the application of the results, the indicators of the probabilistic ordered weighted averaging (POWA) operator are derived. Finally, an example is provided to show the application to a specific context.
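For concreteness, the sketch below shows the plain OWA operator together with one of its classical indicators (Yager's orness); the article's extensions of the balance, divergence, variance and Rényi-entropy indicators to discrete Choquet integrals are not reproduced here.

```python
# Minimal OWA sketch: aggregation of a score vector with a given weight
# vector, plus the classical orness indicator. This only illustrates the
# plain OWA operator, not the Choquet-integral extensions of the article.
import numpy as np

def owa(x: np.ndarray, w: np.ndarray) -> float:
    """OWA(x) = sum_i w_i * x_(i), with x sorted in descending order."""
    assert np.isclose(w.sum(), 1.0) and np.all(w >= 0)
    return float(np.sort(x)[::-1] @ w)

def orness(w: np.ndarray) -> float:
    """Yager's attitudinal character: 1 for max, 0 for min, 0.5 for the mean."""
    n = w.size
    return float(sum((n - i) * w[i - 1] for i in range(1, n + 1)) / (n - 1))

if __name__ == "__main__":
    scores = np.array([0.7, 0.2, 0.9, 0.5])
    weights = np.array([0.4, 0.3, 0.2, 0.1])   # optimistic weighting
    print(owa(scores, weights), orness(weights))
```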
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown rapidly since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies that require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation owing to their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in assessing the quality of the images produced with those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists. The work began by tackling the way to characterise image quality in musculoskeletal examinations.
We focused, in particular, on image noise and spatial resolution behaviours when iterative image reconstruction was used. The analyses of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern when reducing patient dose in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of human visual system elements. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: the standard metrics of the field remain important for assessing unit compliance with legal requirements, but model observers are the way to go when optimising imaging protocols.
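As background for the contrast drawn above between Fourier-space metrics and model observers, the classical prewhitening (ideal) observer detectability index for a signal-known-exactly task in stationary Gaussian noise can be written in terms of the usual Fourier metrics; this is the standard textbook expression, not a formula taken from the thesis:

```latex
% Detectability index of the prewhitening ideal observer (SKE task,
% stationary Gaussian noise), expressed with classical Fourier-space
% metrics: \Delta S(f) is the Fourier transform of the expected signal
% difference, MTF the modulation transfer function and NPS the noise
% power spectrum.
{d'}_{\mathrm{PW}}^{2} \;=\; \int
   \frac{\bigl|\Delta S(f)\,\mathrm{MTF}(f)\bigr|^{2}}{\mathrm{NPS}(f)}\, df
```

One common motivation for moving beyond such Fourier-based figures of merit is that iterative reconstructions are non-linear and non-stationary; the work described here instead relies on task-based model observers applied directly to the reconstructed images.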
Abstract:
The role of transport in the economy is twofold. As a sector of economic activity it contributes a share of national income. On the other hand, improvements in transport infrastructure create room for accelerated economic growth. As a means to support railways as a safe and environmentally friendly transport mode, EU legislation has required the opening of domestic railway freight to competition from the beginning of 2007. The importance of railways as a mode of transport has been great in Finland, as a larger share of freight has been carried on rails than in Europe on average. In this thesis it is claimed that the efficiency of goods transport can be enhanced by service-specific investments. Furthermore, it is stressed that simulation can and should be used to evaluate the cost-efficiency of transport systems at the operational level, as well as to assess transport infrastructure investments. In all the studied cases notable efficiency improvements were found. For example in distribution, home delivery of groceries can be almost twice as cost-efficient as the current practice of visiting the store. The majority of the cases concentrated on railway freight. In timber transportation, the item with the largest annual transport volume in domestic railway freight in Finland, the transportation cost could be reduced most substantially. Also in international timber procurement, the utilization of railway wagons could be improved by combining complementary flows. The efficiency improvements also have positive environmental effects; a large part of road transit could be moved to rails annually. If the impacts of freight transport are included in the cost-benefit analysis of railway investments, an increase of up to 50% in the net benefits of the evaluated alternatives can be observed, avoiding a possible in-built bias in the assessment framework and thus increasing the efficiency of national investments in transport infrastructure. Transportation systems are a typical example of complex real-world systems that cannot be analysed realistically by analytical methods, whereas simulation allows the inclusion of dynamics and the required level of detail. Regarding simulation as a viable tool for assessing the efficiency of transportation systems is also supported by the international survey conducted among railway freight operators: operators use operations research methods widely for planning purposes, while simulation is applied only by the larger operators.
Abstract:
Globalization has increased the aggregate demand for transport. While transport volumes increase, the importance of ecological values has sharpened: the carbon footprint has become a measure known worldwide. The European Union, together with other communities, emphasizes friendliness to the environment, and the same trend has extended to transport. Railway transport is noted as a potential substitute for road transport, as it decreases congestion and lowers emission levels. The railway freight market was liberalized in the European Union in 2007, which enabled new operators to enter the market. This research had two main objectives. Firstly, it examined the main market entry strategies utilized and the barriers to entry confronted by the operators who entered the market after the liberalization. Secondly, the aim was to find ways in which the governmental organization could enhance its service towards potential railway freight operators. The research is a qualitative case study utilizing a descriptive analytical research method with a normative shade. The empirical data was gathered by interviewing Swedish and Polish railway freight operators using a semi-structured theme interview. The research provides novel information by using first-hand data; the topic has previously been researched using second-hand data and literature analyses. Based on this research, rolling stock acquisition, the required investments and bureaucracy form the main barriers to entry. The results show that the most utilized market entry strategies are start-up and vertical integration. The governmental organization could enhance the market entry process by organizing courses, paying extra attention to flexibility and internal know-how, and educating the staff.
Abstract:
The present thesis is focused on minimizing the experimental effort needed to predict pollutant propagation in rivers, by means of mathematical modelling and knowledge re-use. The mathematical modelling is based on the well-known advection-dispersion equation, while the knowledge re-use approach employs the methods of case-based reasoning, graphical analysis and text mining. The contribution of the thesis to the pollutant transport research field consists of: (1) analytical and numerical models for pollutant transport prediction; (2) two novel techniques that enable the use of variable parameters along rivers in analytical models; (3) models for the estimation of pollutant transport characteristic parameters (velocity, dispersion coefficient and nutrient transformation rates) as functions of water flow, channel characteristics and/or seasonality; (4) a graphical analysis method for the identification of pollution sources along rivers; (5) a case-based reasoning tool for the identification of crucial information related to pollutant transport modelling; and (6) the application of a software tool for the re-use of information during pollutant transport modelling research. These support tools are applicable in water quality research and in practice, as they can be involved in multiple activities. The models are capable of predicting pollutant propagation along rivers in cases of both ordinary pollution and accidents. They can also be applied to other, similar rivers when modelling pollutant transport with little available experimental concentration data, because the parameter estimation models developed in the thesis enable the calculation of transport characteristic parameters as functions of river hydraulic parameters and/or seasonality. The similarity between rivers is assessed using case-based reasoning tools, and additional necessary information can be identified with the software for information re-use. Such systems support users and open up possibilities for new modelling methods, monitoring facilities and better river water quality management tools. They are also useful for estimating the environmental impact of possible technological changes and can be applied in the pre-design stage and/or in the practical operation of processes.
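For reference, the advection-dispersion equation mentioned above is most often written in the following one-dimensional form, together with its classical analytical solution for an instantaneous release; the symbols follow common usage and are not taken verbatim from the thesis:

```latex
% Standard one-dimensional advection-dispersion equation with first-order
% transformation (C concentration, u mean flow velocity, D longitudinal
% dispersion coefficient, k transformation rate, A cross-sectional area),
% and the classical solution for an instantaneous release of mass M:
\frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x}
  = D\,\frac{\partial^{2} C}{\partial x^{2}} - kC,
\qquad
C(x,t) = \frac{M}{A\sqrt{4\pi D t}}
         \exp\!\left(-\frac{(x-ut)^{2}}{4Dt}\right)\exp(-kt).
```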
Abstract:
In this work, a new mathematical equation correction approach for overcoming spectral and transport interferences was proposed. The proposal was applied to eliminate the spectral interference caused by PO molecules at the 217.0005 nm Pb line and the transport interference caused by variations in phosphoric acid concentration. Correction may be necessary at 217.0005 nm to account for the contribution of PO, since A_total(217.0005 nm) = A_Pb(217.0005 nm) + A_PO(217.0005 nm). This may easily be done by measuring another PO wavelength (e.g. 217.0458 nm) and calculating the relative contribution of the PO absorbance (A_PO) to the total absorbance (A_total) at 217.0005 nm: A_Pb(217.0005 nm) = A_total(217.0005 nm) − A_PO(217.0005 nm) = A_total(217.0005 nm) − k·A_PO(217.0458 nm). The correction factor k is calculated from the slopes of calibration curves built for phosphorus (P) standard solutions measured at 217.0005 and 217.0458 nm, i.e. k = slope(217.0005 nm)/slope(217.0458 nm). For a wavelength-integrated absorbance of 3 pixels and a sample aspiration rate of 5.0 mL min⁻¹, analytical curves in the 0.1-1.0 mg L⁻¹ Pb range with linearity better than 0.9990 were consistently obtained. Calibration curves for P at 217.0005 and 217.0458 nm with linearity better than 0.998 were obtained. Relative standard deviations (RSD) of measurements (n = 12) were in the ranges 1.4-4.3% and 2.0-6.0% without and with the mathematical equation correction approach, respectively. The limit of detection calculated for the analytical line at 217.0005 nm was 10 µg L⁻¹ Pb. Recoveries for Pb spikes were in the 97.5-100% and 105-230% intervals with and without the mathematical equation correction approach, respectively.
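A minimal numerical sketch of this correction is given below; the calibration values are hypothetical and serve only to show how k and the corrected Pb absorbance are obtained.

```python
# Minimal numeric sketch of the correction described above (illustrative
# values only, not data from the paper): k is the ratio of the slopes of P
# calibration curves at 217.0005 nm and 217.0458 nm, and the PO absorbance
# measured at 217.0458 nm is scaled by k and subtracted from the total
# absorbance at the 217.0005 nm Pb line.
import numpy as np

def correction_factor(conc_p, abs_217_0005, abs_217_0458):
    """k = slope(217.0005 nm) / slope(217.0458 nm) from P calibration data."""
    slope_0005 = np.polyfit(conc_p, abs_217_0005, 1)[0]
    slope_0458 = np.polyfit(conc_p, abs_217_0458, 1)[0]
    return slope_0005 / slope_0458

def corrected_pb_absorbance(a_total_0005, a_po_0458, k):
    """A_Pb(217.0005 nm) = A_total(217.0005 nm) - k * A_PO(217.0458 nm)."""
    return a_total_0005 - k * a_po_0458

if __name__ == "__main__":
    conc_p = np.array([0.0, 50.0, 100.0, 200.0])       # hypothetical P standards
    abs_0005 = np.array([0.001, 0.020, 0.041, 0.080])  # hypothetical absorbances
    abs_0458 = np.array([0.002, 0.050, 0.099, 0.201])
    k = correction_factor(conc_p, abs_0005, abs_0458)
    print(k, corrected_pb_absorbance(0.150, 0.060, k))
```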
Abstract:
Preference relations and their modeling have played a crucial role in both the social sciences and applied mathematics. A special category of preference relations is represented by cardinal preference relations, which are relations that also take into account the degree of preference. Preference relations play a pivotal role in most multi-criteria decision-making methods and in operational research. This thesis aims at presenting some recent advances in their methodology. There are a number of open issues in this field, and the contributions presented in this thesis can be grouped accordingly. The first issue regards the estimation of a weight vector from a preference relation. A new and efficient algorithm is presented for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation. The same section contains a proof that twenty methods already proposed in the literature lead to unsatisfactory results, as they employ a conflicting constraint in their optimization model. The second area of interest concerns consistency evaluation, and it is arguably the core of the thesis. The thesis contains proofs that some indices are equivalent and that, therefore, some seemingly different formulae end up leading to the very same result. Moreover, some numerical simulations are presented. The section ends with some considerations on a new method for fairly evaluating consistency. The third matter regards incomplete relations and how to estimate missing comparisons. This section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth and last topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
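To make the notion of a priority vector concrete, the sketch below applies the classical row geometric-mean method to a multiplicative reciprocal pairwise comparison matrix; this is a standard textbook method, not the new algorithm proposed in the thesis, whose reciprocal relations may also be defined differently (for example additively).

```python
# Illustrative sketch only: classical row geometric-mean derivation of a
# priority (weight) vector from a multiplicative reciprocal pairwise
# comparison matrix (a_ij * a_ji = 1). Not the thesis's new algorithm.
import numpy as np

def priority_vector(a: np.ndarray) -> np.ndarray:
    """Normalized row geometric means of a positive reciprocal matrix."""
    gm = np.prod(a, axis=1) ** (1.0 / a.shape[0])
    return gm / gm.sum()

if __name__ == "__main__":
    # Hypothetical 3x3 comparison matrix: alternative 1 is the most preferred.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(priority_vector(A))
```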
Abstract:
The present study aimed to determine the volumetric shrinkage rate of bean (Phaseolus vulgaris L.) seeds during air-drying under different conditions of air temperature and relative humidity, to fit several mathematical models to the observed empirical values, and to select the one that best represents the phenomenon. Six mathematical models were fitted to the experimental values. The goodness of fit of each model was determined from the coefficient of determination, the behavior of the distribution of the residuals, and the magnitude of the average relative and estimated errors. The volumetric shrinkage of the bean seeds during drying was between 25 and 37%; it depends essentially on the final moisture content, regardless of the air conditions during drying. The Modified Bala & Woods model best represented the process.
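A brief sketch of the model-fitting step is given below; the candidate models and data are generic illustrations (not the six models or the measurements of the study), fitted by non-linear least squares and compared by the coefficient of determination.

```python
# Illustrative sketch of the model-fitting step: fit simple candidate
# shrinkage models (generic linear and exponential forms, NOT the models
# evaluated in the study) to synthetic moisture-content data and compare
# them by R^2.
import numpy as np
from scipy.optimize import curve_fit

def linear_model(m, a, b):
    return a + b * m                      # V/V0 as a linear function of moisture

def exponential_model(m, a, b, c):
    return a + b * np.exp(c * m)          # generic exponential shrinkage form

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

if __name__ == "__main__":
    # Hypothetical data: moisture content (decimal, dry basis) vs volume ratio.
    moisture = np.array([0.45, 0.38, 0.30, 0.24, 0.18, 0.13])
    vol_ratio = np.array([1.00, 0.95, 0.88, 0.82, 0.76, 0.71])
    for model, p0 in [(linear_model, (0.5, 1.0)), (exponential_model, (0.5, 0.1, 2.0))]:
        popt, _ = curve_fit(model, moisture, vol_ratio, p0=p0, maxfev=10000)
        print(model.__name__, popt, r_squared(vol_ratio, model(moisture, *popt)))
```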