958 results for Dunkl Transform


Relevância: 10.00%

Resumo:

The Compact LInear Collider (CLIC) collaboration studies the possibility of building a multi-TeV (3 TeV centre-of-mass), high-luminosity (10^34 cm^-2 s^-1) electron-positron collider for particle physics. The CLIC scheme is based on high-frequency (30 GHz) linear accelerators powered by a low-energy, high-intensity drive beam running parallel to the main linear accelerators (the Two-Beam Acceleration concept). One of the main challenges of this scheme is to generate the drive beam in a low-frequency accelerator complex and then to impose the high-frequency bunch structure needed for the final acceleration. To provide this bunch frequency multiplication, the main manipulation consists in sending the beam through an isochronous combiner ring, using radio-frequency (RF) deflectors to inject and combine the electron bunches. Such a scheme has never been used before, and the first stage of the CLIC Test Facility 3 (CTF3) project aims at a low-charge demonstration of bunch frequency multiplication by RF injection into an isochronous ring. This proof-of-principle experiment, successfully performed at CERN in 2002 using a modified version of the LEP (Large Electron Positron) pre-injector complex, is the central subject of this report. The bunch combination experiment consists in accelerating, in a linear accelerator, five pulses in which the electron bunches are spaced by 10 cm, and combining them in an isochronous ring into one pulse in which the bunches are spaced by 2 cm, thus multiplying both the bunch frequency and the charge per pulse by a factor of five. The combination is done by means of RF deflecting cavities that create a time-dependent closed-orbit bump inside the ring, allowing the bunches of the five pulses to be interleaved. This process imposes several beam dynamics constraints, such as isochronicity, as well as specific tolerances on the electron bunches, which are defined in this report. The design studies of the CTF3 Preliminary Phase are detailed, with emphasis on the novel injection process using RF deflectors. The high-power tests performed on the RF deflectors prior to their installation in the ring are also reported. The commissioning activity is presented by comparing beam measurements with model simulations and theoretical expectations. Finally, the bunch frequency multiplication experiments are described and analysed. It is shown that bunch frequency multiplication is feasible with very good efficiency after careful optimisation of the injection and RF deflector parameters. In addition to the experience acquired in the operation of these RF deflectors, important conclusions for future CTF3 and CLIC activities are drawn from this first demonstration of bunch frequency multiplication by RF injection into an isochronous ring.
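
The factor-five interleaving arithmetic can be illustrated with a short numerical sketch. The snippet below only reproduces the spacing bookkeeping from the abstract (five trains at 10 cm spacing merged into one at 2 cm); the bunch counts and all variable names are illustrative, and none of the CTF3 ring optics is modelled.

```python
# Toy illustration of the factor-five bunch interleaving described above.
# The numbers follow the abstract (10 cm -> 2 cm spacing); the code is only
# a sketch of the combination arithmetic, not of the CTF3 ring optics.
import numpy as np

BUNCH_SPACING_IN = 0.10   # m, spacing within each incoming pulse
N_PULSES = 5              # pulses combined in the ring
N_BUNCHES = 8             # bunches per pulse (arbitrary, for the demo)

# Each successive turn, the RF deflectors shift the injected pulse by one
# fifth of the incoming spacing, so its bunches fall between stored ones.
trains = [np.arange(N_BUNCHES) * BUNCH_SPACING_IN + k * BUNCH_SPACING_IN / N_PULSES
          for k in range(N_PULSES)]

combined = np.sort(np.concatenate(trains))
spacing = np.diff(combined)

print(f"spacing after combination: {spacing.min():.3f}-{spacing.max():.3f} m")
# -> 0.020-0.020 m: bunch frequency (and charge per pulse) multiplied by five
```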

Relevância: 10.00%

Resumo:

The problem of selecting an appropriate wavelet filter is always present in signal compression based on the wavelet transform. In this report, we propose a method to select a wavelet filter from a predefined set of filters for the compression of spectra from a multispectral image. The wavelet filter selection is based on Learning Vector Quantization (LVQ). In the training phase, the best wavelet filter for each spectrum of the test images is found by a careful compression-decompression evaluation. Certain spectral features are used to characterize the pixel spectra. The LVQ is used to form the best wavelet filter class for different types of spectra from multispectral images. When a new image is to be compressed, a set of spectra from that image is selected, the spectra are classified by the trained LVQ, and the filter associated with the largest class is selected for the compression of every spectrum of the multispectral image. The results show that in almost every case our method finds the most suitable wavelet filter from the predefined set.
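
A minimal sketch may make the selection scheme concrete. The LVQ1 update rule and the "largest class wins" decision follow the description above; the feature extraction, learning rate and prototype initialisation are illustrative assumptions, not the report's choices.

```python
# Minimal sketch of the filter-selection scheme, assuming each spectrum has
# already been reduced to a feature vector and that the best filter per
# training spectrum is known from a compression-decompression evaluation.
import numpy as np

def lvq1_train(X, y, n_classes, lr=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    # one prototype per filter class, initialised on the class means
    protos = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))
            step = lr * (X[i] - protos[j])
            protos[j] += step if j == y[i] else -step  # attract / repel
    return protos

def select_filter(protos, spectra_features):
    # classify a sample of spectra from the new image and return the
    # filter class that wins most often ("largest class" rule)
    labels = np.argmin(
        np.linalg.norm(spectra_features[:, None, :] - protos[None], axis=2), axis=1)
    return np.bincount(labels, minlength=len(protos)).argmax()

# usage: protos = lvq1_train(train_feats, best_filter_idx, n_filters)
#        filt = select_filter(protos, new_image_feats)
```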

Relevância: 10.00%

Resumo:

The driving forces of technology and globalization continuously transform the business landscape in ways that undermine the existing strategies and innovations of organizations. The challenge for organizations is to establish conditions in which they can create new knowledge for innovative business ideas in interaction with other organizations and individuals. Innovation processes continuously need new external stimulation and seek new ideas, information and knowledge located more and more outside traditional organizational boundaries. In several studies, the early phases of the innovation process have been considered the most critical ones: during these phases, the innovation process can either take off or come to an end. External knowledge acquisition and utilization have been found to be important at this stage of the innovation process, providing information about the development of future markets and the needs of new innovative business ideas. To make this possible, new methods and approaches to manage proactive knowledge creation and sharing activities are needed. In this study, knowledge creation and sharing in the early phases of the innovation process are studied, and the understanding of knowledge management in the innovation process in an open and collaborative context is advanced. Furthermore, the innovation management methods in this study are combined in a novel way to establish an open innovation process and tested in real-life cases. For these purposes, two complementary and sequentially applied group work methods - the heuristic scenario method and the idea generation process - are examined, focusing the research on the support of the open knowledge creation and sharing process. The research objective of this thesis concerns two doctrines: innovation management, including knowledge management, and futures research concerning the scenario paradigm. The thesis also applies a group decision support system (GDSS) in the idea generation process to utilize the knowledge brought together during the scenario process.

Relevância: 10.00%

Resumo:

This doctoral study examines the application of infrared spectroscopy and multivariate data analysis methods to the monitoring of crystallization processes and the analysis of crystalline products. In crystallization research worldwide, the application of various measurement techniques to the continuous monitoring of crystallization phenomena, both in the liquid phase and in the forming crystals, is currently being studied intensively. In addition, characterization of the product is essential for ensuring product quality. In pharmaceutical manufacturing in particular, interest in this type of research is driven by the guidance of the U.S. Food and Drug Administration (FDA) on process analytical technologies (PAT), which broadly defines the requirements for the measurements needed in drug manufacturing and product characterization to guarantee safe manufacturing processes. Cooling crystallization is a separation method widely used, especially in the pharmaceutical industry, for the purification of solid crude products. In this method, the solid raw material to be purified is dissolved in a suitable solvent at a relatively high temperature. The solubility of the substance in the solvent decreases with decreasing temperature, so when the system is cooled, the solute concentration in the process exceeds the solubility concentration. In such a supersaturated system, new crystals tend to form, or existing crystals grow. Supersaturation is one of the most important factors affecting the quality of the crystalline product. In cooling crystallization, the properties of the product can be influenced by, among other things, the choice of solvent, the cooling profile and the mixing. In addition, the onset of the crystallization process, i.e. the moment the first crystals form, affects the properties of the product. The quality of a crystalline product is defined in terms of the mean crystal size, the size and shape distributions, and the purity. In the pharmaceutical industry it is often required that the product represents a particular polymorphic form, polymorphism meaning the ability of molecules to arrange themselves in the crystal lattice in several different ways. The properties mentioned above affect the downstream processing of the product, such as its filterability, grindability and tabletability. The polymorphic form also affects many usability properties of the product, such as the dissolution rate of a drug in the body. In this thesis, the cooling crystallization of sulfathiazole was studied using several solvent mixtures and cooling profiles, and the effects of these factors on the quality attributes of the product were examined. Infrared spectroscopy is a method widely applied in chemical research. It measures the spectral changes in the IR region caused by the vibrations of the molecules of the sample under study. In this work, in-process measurements were carried out with an immersion probe placed in situ in the reactor, using attenuated total reflection (ATR) Fourier transform infrared (FTIR) spectroscopy. Powder samples were measured off-line by diffuse reflectance (DRIFT) FTIR spectroscopy. With multivariate methods (chemometrics), spectral data comprising several hundreds or even thousands of variables can be refined into qualitative or quantitative information describing the process. The thesis examined broadly the application of various multivariate methods in order to extract as versatile process-descriptive information as possible from the measured spectral data.
As a result of the thesis, a calibration routine is proposed for measuring the solute concentration, and further the supersaturation level, during a crystallization process. The development of the calibration routine included methods for assessing the quality of the data, data pre-processing methods, the actual calibration modelling, and the validation of the model. This yields real-time information about the driving force of the crystallization process, which further improves the understanding and controllability of the process. The effects of the supersaturation level on the quality of the resulting crystalline product were followed in a number of crystallization experiments. The thesis also presents a method, based on multivariate statistical process monitoring, by which the moment of spontaneous primary nucleation can be predicted from the measured spectral data, and by which the polymorphic form appearing at nucleation can possibly be inferred. Using the proposed method, the formation of crystal nuclei can be anticipated, and possible faults in the early stages of the crystallization process can be detected. By predicting the emerging polymorphic form, the nucleation of an undesired polymorph can be detected, and the control of the crystallization can possibly be adjusted to obtain the desired polymorphic form. Multivariate methods were also applied to determining batch-to-batch variation of crystallizations from the measured spectral data. The information obtained from this type of analysis can be exploited in the design and optimization of crystallization processes. The thesis also tested the suitability of IR spectroscopy and various multivariate methods for the rapid determination of the polymorphic composition of a crystalline product. Powder samples could be classified into samples containing different polymorphs using suitable multivariate classification methods. This offers a fast method for a rough assessment of the polymorphic composition of a powder sample, i.e. which single polymorph the sample mainly contains. The actual quantitative analysis, i.e. determining how much of each polymorph the sample contains, e.g. in weight percent, requires a physical calibration set covering all the polymorphs, which can be difficult owing to the poor availability of the pure polymorphs.
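
The calibration routine itself is not reproduced here, but its overall shape (pre-processing, PLS regression from spectra to concentration, validation on held-out data) can be sketched as follows. This is a minimal illustration on synthetic spectra; the SNV pre-processing, the number of PLS components and all variable names are assumptions for the demo, not choices taken from the thesis.

```python
# Sketch of a spectra-to-concentration calibration: SNV pre-processing,
# PLS regression, and held-out validation. Data are synthetic stand-ins
# for ATR-FTIR spectra recorded during a crystallization.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
wavenumbers = np.linspace(800, 1800, 400)
conc = rng.uniform(0.05, 0.30, size=120)                 # g solute / g solvent
peak = np.exp(-((wavenumbers - 1250) / 40) ** 2)         # synthetic solute band
X = conc[:, None] * peak + 0.01 * rng.standard_normal((120, 400))

def snv(spectra):
    # standard normal variate: remove per-spectrum offset and scaling
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

X_tr, X_te, y_tr, y_te = train_test_split(snv(X), conc, test_size=0.25, random_state=1)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
rmsep = mean_squared_error(y_te, pls.predict(X_te)) ** 0.5
print(f"RMSEP: {rmsep:.4f}")  # supersaturation then follows as c / c_sat(T)
```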

Relevância: 10.00%

Resumo:

Perceiving the world visually is a basic act for humans, but for computers it is still an unsolved problem. The variability present in natural environments is an obstacle to effective computer vision. The goal of invariant object recognition is to recognise objects in a digital image despite variations in, for example, pose, lighting or occlusion. In this study, invariant object recognition is considered from the viewpoint of feature extraction. The differences between local and global features are studied, with emphasis on feature extraction based on the Hough transform and Gabor filtering. The methods are examined with respect to four capabilities: generality, invariance, stability, and efficiency. Invariant features are presented using both the Hough transform and Gabor filtering. A modified Hough transform technique is also presented, in which distortion tolerance is increased by incorporating local information. In addition, methods for decreasing the computational cost of the Hough transform employing parallel processing and local information are introduced.
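
As an illustration of the Gabor-filtering side of this feature extraction, the sketch below builds a small filter bank and pools the response magnitudes over orientation. The pooling rule is a common simplification for rotation tolerance, not the specific invariant features of the study; kernel size, frequencies and orientations are illustrative.

```python
# Gabor filter bank feature extraction (sketch): complex Gabor kernels at
# several orientations and frequencies, with magnitudes pooled over
# orientation for (approximate) rotation tolerance.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    # complex Gabor: Gaussian envelope times an oriented complex sinusoid
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.exp(2j * np.pi * freq * xr)

def gabor_features(image, freqs=(0.1, 0.2), n_orient=6):
    feats = []
    for f in freqs:
        mags = [np.abs(convolve2d(image, gabor_kernel(f, np.pi * k / n_orient),
                                  mode="same")).mean()
                for k in range(n_orient)]
        # pooling magnitudes over orientation trades discrimination
        # for rotation tolerance
        feats += [np.mean(mags), np.max(mags)]
    return np.array(feats)

# usage: vec = gabor_features(gray_image_as_2d_float_array)
```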

Relevância: 10.00%

Resumo:

Quality inspection and assurance is a very important step when today's products are sold to markets. As products are produced in vast quantities, the interest in automating quality inspection tasks has increased correspondingly. Quality inspection tasks usually require the detection of deficiencies, defined as irregularities in this thesis. Objects containing regular patterns appear quite frequently in certain industries and sciences, e.g. half-tone raster patterns in the printing industry, crystal lattice structures in solid state physics, and solder joints and components in the electronics industry. In this thesis, the problem of regular patterns and irregularities is described in analytical form and three different detection methods are proposed. All the methods are based on the ability of the Fourier transform to represent regular information compactly. The Fourier transform enables the separation of the regular and irregular parts of an image, but the three methods presented are shown to differ in generality and computational complexity. The need to detect fine and sparse details is common in quality inspection tasks, e.g. locating small fractures in components in the electronics industry or detecting tearing in paper samples in the printing industry. In this thesis, a general definition of such details is given by defining sufficient statistical properties in the histogram domain. The analytical definition allows a quantitative comparison of methods designed for detail detection. Based on the definition, the use of existing thresholding methods is shown to be well motivated. A comparison of thresholding methods shows that minimum error thresholding outperforms the other standard methods. The results are successfully applied to a paper printability and runnability inspection setup. Missing dots in a repeating raster pattern are detected from Heliotest strips, and small surface defects from IGT picking papers.
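
The common Fourier-domain idea behind the three methods can be sketched briefly: a regular pattern concentrates its energy into a few strong frequency peaks, so suppressing those peaks and transforming back leaves mainly the irregularities. The relative-magnitude threshold below is an illustrative peak-selection rule, not one of the thesis's three methods.

```python
# Fourier-domain separation of regular and irregular parts (sketch):
# zero the strongest spectral peaks (the repeating pattern), invert,
# and what remains highlights the irregularities.
import numpy as np

def irregularity_map(image, rel_thresh=0.05):
    F = np.fft.fft2(image)
    mag = np.abs(F)
    dc = F[0, 0]
    F[mag > rel_thresh * mag.max()] = 0.0   # strong peaks = regular pattern
    F[0, 0] = dc                            # keep the mean brightness
    residual = np.real(np.fft.ifft2(F))
    return np.abs(residual - residual.mean())

# Example: a dot raster with one missing dot shows up as a bright spot
img = np.zeros((128, 128))
img[::8, ::8] = 1.0          # regular dot raster
img[64, 64] = 0.0            # "missing dot" defect
defect = irregularity_map(img)
print(np.unravel_index(defect.argmax(), defect.shape))  # -> (64, 64)
```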

Relevância: 10.00%

Resumo:

This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients that minimize the maximum deviation from the Euclidean distance in the image plane and produce more accurate global distance values. Convergence properties of the traditional mask operation, or sequential local transformation, and of the ordered propagation approach are analyzed and compared to the new, efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize shortest routes between two points, or two point sets, along a varying-height surface. In a digital image, several paths can share the same minimal length, and the Route DTOCS visualizes them all. A single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi/Dirichlet tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point that is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression and surface roughness evaluation.
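
A compact sketch of the priority pixel queue idea follows: Dijkstra-style propagation over the pixel grid, here with the basic DTOCS local distance 1 + |gray difference| between 4-neighbours. The optimised local coefficients of the Optimal DTOCS and the route and nearest-neighbour extensions are omitted for brevity.

```python
# Priority pixel queue distance transform on a gray-level surface (sketch),
# using the basic DTOCS local distance between 4-neighbours.
import heapq
import numpy as np

def dtocs(gray, seeds):
    """Distance map from seed pixels; seeds is an iterable of (row, col)."""
    dist = np.full(gray.shape, np.inf)
    heap = []
    for p in seeds:
        dist[p] = 0.0
        heapq.heappush(heap, (0.0, p))
    h, w = gray.shape
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                      # stale queue entry, skip
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 1.0 + abs(float(gray[ny, nx]) - float(gray[y, x]))
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist
```

Recording, for each pixel, which seed first reached it would yield the nearest neighbor transform mentioned above alongside the distance map.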

Relevância: 10.00%

Resumo:

Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions. The compression of hyperspectral data is therefore an area of active research. In some fields, the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction- and transform-based methods are studied. Two kinds of prediction-based methods are considered. In the first, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the latter linear prediction method is also presented. Two transform-based methods are presented as well. Vector Quantization (VQ) is used together with a new coding of the residual image. In addition, we have developed a new back end for a compression method utilizing Principal Component Analysis (PCA) and the Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods. The results show that the proposed linear prediction methods outperform the previous methods. In addition, a novel fast exact nearest-neighbor search method is developed and used to speed up the Linde-Buzo-Gray (LBG) clustering method.
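
The first prediction scheme can be sketched as follows: cluster the spectra, fit one least-squares linear predictor per cluster, and keep the prediction residuals that a real codec would entropy-code (after integer rounding, to remain lossless). The cluster count, predictor order, and the use of k-means in place of LBG are illustrative assumptions.

```python
# Clustered linear prediction for hyperspectral spectra (sketch): one
# predictor per cluster, predicting each band from the previous `order`
# bands; the residuals are what a lossless codec would entropy-code.
import numpy as np
from sklearn.cluster import KMeans

def cluster_predict_residuals(spectra, n_clusters=4, order=3):
    # spectra: (n_pixels, n_bands) array
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(spectra)
    residuals = np.zeros_like(spectra, dtype=float)
    residuals[:, :order] = spectra[:, :order]          # first bands sent raw
    for c in range(n_clusters):
        S = spectra[labels == c].astype(float)
        for b in range(order, spectra.shape[1]):
            X, y = S[:, b - order:b], S[:, b]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # per-cluster predictor
            # a true lossless codec would round X @ coef to an integer here
            residuals[labels == c, b] = y - X @ coef
    return labels, residuals
```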

Relevância: 10.00%

Resumo:

The objective of industrial crystallization is to obtain a crystalline product with the desired crystal size distribution, mean crystal size, crystal shape, purity, and polymorphic and pseudopolymorphic form. Effective control of product quality requires an understanding of the thermodynamics of the crystallizing system and of the effects of the operating parameters on the properties of the crystalline product. Obtaining reliable in-line information about crystal properties and about supersaturation, the driving force of crystallization, would therefore be very advantageous. Advanced techniques, such as Raman spectroscopy, attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy and in-line imaging, offer great potential for obtaining reliable information during crystallization and thus a better understanding of the fundamental mechanisms involved (nucleation and crystal growth). In the present work, the relative stability of anhydrate and dihydrate carbamazepine in mixed solvents containing water and ethanol was investigated. The kinetics of the solvent-mediated phase transformation of the anhydrate to the hydrate in the mixed solvents was studied using an in-line Raman immersion probe. The effects of the operating parameters, in terms of solvent composition, temperature and the use of certain additives, on the phase transformation kinetics were explored. Comparison of the off-line measured solute concentration with the solid-phase composition measured by in-line Raman spectroscopy allowed the identification of the fundamental processes during the phase transformation. The effects of thermodynamic and kinetic factors on the anhydrate/hydrate phase of carbamazepine crystals during cooling crystallization were also investigated. The effect of certain additives on the batch cooling crystallization of potassium dihydrogen phosphate (KDP) was investigated. The growth rate of a certain crystal face was determined from images taken with an in-line video microscope, and an in-line image processing method was developed to characterize the size and shape of the crystals. ATR-FTIR spectroscopy and a laser reflection particle size analyzer were used to study the effects of cooling modes and seeding parameters on the final crystal size distribution of an organic compound, C15. Based on the results obtained, operating conditions were proposed that give improved product properties in terms of an increased mean crystal size and a narrower size distribution.
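
The face growth-rate estimate reduces to a simple fit once image processing has yielded the face position over time, as in this sketch on synthetic tracking data (the geometry and all numbers are invented for illustration).

```python
# Growth rate of a crystal face from tracked positions (sketch): the slope
# of a linear fit of face position against image timestamps.
import numpy as np

t = np.arange(0, 600, 30.0)                      # s, image timestamps
face_pos = 50.0 + 0.02 * t + np.random.default_rng(0).normal(0, 0.5, t.size)  # um

growth_rate, _ = np.polyfit(t, face_pos, 1)
print(f"face growth rate: {growth_rate * 1e3:.2f} nm/s")   # ~20 nm/s here
```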

Relevância: 10.00%

Resumo:

This paper asks a simple question: if humans and their actions co-evolve with hydrological systems (Sivapalan et al., 2012), what is the role of hydrological scientists, who are also humans, within this system? To put it more directly, as traditionally there is a supposed separation of scientists and society, can we maintain this separation as socio-hydrologists studying a socio-hydrological world? This paper argues that we cannot, using four linked sections. The first section draws directly upon the concern of science-technology studies to make a case to the (socio-hydrological) community that we need to be sensitive to constructivist accounts of science in general and socio-hydrology in particular. I review three positions taken by such accounts and apply them to hydrological science, supported with specific examples: (a) the ways in which scientific activities frame socio-hydrological research, such that at least some of the knowledge that we obtain is constructed by precisely what we do; (b) the need to attend to how socio-hydrological knowledge is used in decision-making, as evidence suggests that hydrological knowledge does not flow simply from science into policy; and (c) the observation that those who do not normally label themselves as socio-hydrologists may actually have a profound knowledge of socio-hydrology. The second section provides an empirical basis for considering these three issues by detailing the history of the practice of roughness parameterisation, using parameters like Manning's n, in hydrological and hydraulic models for flood inundation mapping. This history sustains the third section that is a more general consideration of one type of socio-hydrological practice: predictive modelling. I show that as part of a socio-hydrological analysis, hydrological prediction needs to be thought through much more carefully: not only because hydrological prediction exists to help inform decisions that are made about water management; but also because those predictions contain assumptions, the predictions are only correct in so far as those assumptions hold, and for those assumptions to hold, the socio-hydrological system (i.e. the world) has to be shaped so as to include them. Here, I add to the "normal" view that ideally our models should represent the world around us, to argue that for our models (and hence our predictions) to be valid, we have to make the world look like our models. Decisions over how the world is modelled may transform the world as much as they represent the world. Thus, socio-hydrological modelling has to become a socially accountable process such that the world is transformed, through the implications of modelling, in a fair and just manner. This leads into the final section of the paper where I consider how socio-hydrological research may be made more socially accountable, in a way that is both sensitive to the constructivist critique (Sect. 1), but which retains the contribution that hydrologists might make to socio-hydrological studies. This includes (1) working with conflict and controversy in hydrological science, rather than trying to eliminate them; (2) using hydrological events to avoid becoming locked into our own frames of explanation and prediction; (3) being empirical and experimental but in a socio-hydrological sense; and (4) co-producing socio-hydrological predictions.
I will show how this might be done through a project that specifically developed predictive models for making interventions in river catchments to increase high river flow attenuation. Therein, I found myself becoming detached from my normal disciplinary networks and attached to the co-production of a predictive hydrological model with communities normally excluded from the practice of hydrological science.
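
To make the roughness-parameterisation example concrete, the sketch below applies Manning's equation, v = (1/n) R^(2/3) S^(1/2) in SI units, to a simple channel and shows how the choice of n propagates directly into the predicted discharge. The channel geometry and the range of n values are illustrative, not taken from the paper.

```python
# A worked illustration of why the choice of Manning's n matters: for a
# rectangular channel, Manning's equation maps an uncertain roughness
# directly into the predicted velocity and discharge.
width, depth, slope = 20.0, 2.0, 1e-3            # m, m, dimensionless
area = width * depth
radius = area / (width + 2 * depth)              # hydraulic radius R = A / P

for n in (0.025, 0.035, 0.050):                  # smooth -> vegetated channel
    v = radius ** (2 / 3) * slope ** 0.5 / n     # Manning velocity, m/s
    print(f"n = {n:.3f}:  v = {v:.2f} m/s,  Q = {v * area:.1f} m^3/s")
# a factor-two spread in n produces a factor-two spread in predicted flow
```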

Relevância: 10.00%

Resumo:

Ruin occurs the first time the surplus of a company or an institution becomes negative. In the Omega model, it is assumed that even with a negative surplus, the company can do business as usual until bankruptcy occurs. The probability of bankruptcy at a point in time depends only on the value of the negative surplus at that time. Under the assumption of Brownian motion for the surplus, the expected discounted value of a penalty at bankruptcy is determined, and hence the probability of bankruptcy. There is an intrinsic relation between the probability of no bankruptcy and an exposure random variable. In special cases, the distribution of the total time the Brownian motion spends below zero is found, and the Laplace transform of the integral of the negative part of the Brownian motion is expressed in terms of the Airy function of the first kind.
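
A Monte Carlo sketch may clarify the model's mechanics: the surplus evolves as a Brownian motion with drift, and while it is negative, bankruptcy strikes at a state-dependent rate. The linear rate function omega(x) = a|x| and all parameter values below are illustrative assumptions, not the paper's specification.

```python
# Monte Carlo sketch of the Omega model: Brownian surplus with drift;
# while the surplus is negative, bankruptcy occurs at rate omega(x) = a*|x|.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, x0 = 0.05, 1.0, 1.0        # drift, volatility, initial surplus
a, T, dt, n_paths = 0.5, 20.0, 0.01, 10_000

x = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)
for _ in range(int(T / dt)):
    x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
    neg = alive & (x < 0)
    # bankruptcy strikes in (t, t+dt] with probability ~ a*|x|*dt
    dies = neg & (rng.random(n_paths) < a * np.abs(x) * dt)
    alive &= ~dies

print(f"estimated P(bankruptcy before T={T}): {1 - alive.mean():.3f}")
```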

Relevância: 10.00%

Resumo:

Metastatic melanomas are frequently refractory to most adjuvant therapies such as chemotherapy and radiotherapy. Recently, immunotherapies have shown good results in the treatment of some metastatic melanomas. Immune cell infiltration in the tumor has been associated with successful immunotherapy. More generally, tumor infiltrating lymphocytes (TILs) in the primary tumor and in metastases of melanoma patients have been demonstrated to correlate positively with favorable clinical outcomes. Altogether, these findings suggest the importance of being able to identify, quantify and characterize immune infiltration at the tumor site for better diagnosis and treatment choices. In this paper, we used Fourier Transform Infrared (FTIR) imaging to identify and quantify different subpopulations of T cells: the cytotoxic T cells (CD8+), the helper T cells (CD4+) and the regulatory T cells (Treg). As a proof of concept, we investigated pure populations isolated from human peripheral blood from 6 healthy donors. These subpopulations were isolated from blood samples by magnetic labeling, and purities were assessed by Fluorescence Activated Cell Sorting (FACS). The results presented here show that FTIR imaging followed by supervised Partial Least Squares Discriminant Analysis (PLS-DA) allows an accurate identification of CD4+ T cells and CD8+ T cells (>86%). We then developed a PLS regression allowing the quantification of Treg in a mix of immune cells (e.g. Peripheral Blood Mononuclear Cells (PBMCs)). Altogether, these results demonstrate the sensitivity of infrared imaging to detect the low biological variability observed in T cell subpopulations.
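
The classification step can be sketched with the usual PLS-DA construction: PLS regression onto one-hot class labels, with the predicted class taken as the argmax. The synthetic "spectra" below merely stand in for per-pixel FTIR absorbance vectors; the class shifts, noise level and component count are illustrative assumptions.

```python
# PLS-DA sketch: PLS regression onto one-hot labels, argmax for the class.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per, n_wn = 150, 300                            # cells per class, wavenumbers
base = rng.standard_normal(n_wn)
classes = [base + shift for shift in (0.0, 0.15, 0.30)]   # CD4+/CD8+/Treg stand-ins
X = np.vstack([c + 0.5 * rng.standard_normal((n_per, n_wn)) for c in classes])
y = np.repeat(np.arange(3), n_per)
Y = np.eye(3)[y]                                  # one-hot targets for PLS-DA

X_tr, X_te, Y_tr, Y_te, _, y_te = train_test_split(
    X, Y, y, test_size=0.3, random_state=0, stratify=y)
plsda = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = plsda.predict(X_te).argmax(axis=1)         # predicted class per cell
print(f"accuracy: {(pred == y_te).mean():.2%}")
```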

Relevância: 10.00%

Resumo:

An analytical approach for the interpretation of multicomponent heterogeneous adsorption or complexation isotherms in terms of multidimensional affinity spectra is presented. The Fourier transform, applied to analyze the corresponding integral equation, leads to an inversion formula which allows the computation of the multicomponent affinity spectrum underlying a given competitive isotherm. Although a different mathematical methodology is used, this procedure can be seen as the extension to multicomponent systems of the classical work of Sips on monocomponent systems. Furthermore, a methodology which yields analytical expressions for the main statistical properties (mean free energies of binding and covariance matrix) of multidimensional affinity spectra is reported. Thus, the level of binding correlation between the different components can be quantified. It must be highlighted that the reported methodology does not require knowledge of the affinity spectrum to calculate the means, variances, and covariance of the binding energies of the different components. The nonideal competitive consistent adsorption (NICCA) isotherm, widely used in metal/proton competitive complexation to environmental macromolecules, and the Frumkin competitive isotherm are selected to illustrate the application of the reported results. Explicit analytical expressions for the affinity spectrum as well as for the correlation matrix are obtained for the NICCA case. © 2004 American Institute of Physics.
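
For orientation, the monocomponent version of the integral equation being inverted can be written as follows (the standard Sips setup; the multicomponent inversion formula itself is given in the paper). With x = log K and u = log c, the kernel depends only on x + u, so the equation is of convolution type and Fourier transformation turns it into an algebraic relation from which the spectrum can be solved.

```latex
% Monocomponent Sips-type integral equation (sketch); \theta(c) is the
% measured overall isotherm, p(\log K) the affinity spectrum to recover.
\begin{equation}
  \theta(c) = \int_{-\infty}^{\infty} p(\log K)\,
              \frac{Kc}{1 + Kc}\,\mathrm{d}(\log K),
  \qquad
  \frac{Kc}{1 + Kc} = \frac{1}{1 + e^{-(\log K + \log c)}} .
\end{equation}
```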

Relevância: 10.00%

Resumo:

Since their first conceptualizations, systematic sports events hosting policies (SSEHP) have evolved considerably, owing to the simultaneous transformation of sport and sports events and to the emergence of territorial marketing. In recent decades, the popularity of SSEHP among territorial managers has grown dramatically, moving from purely local policies to regional and even national ones. This article focuses on the main evolutions of these SSEHP through the case of one city, Lausanne, and its canton, Vaud. Although Lausanne's situation is particular in many ways, owing to the presence in Switzerland and in the canton of Vaud of many international sports federations (around sixty based in the country, among them the IOC, FIFA and UEFA), examples from other destinations (Monaco, Doha, London, Denmark, Russia) are also used to show that the evolutions observed in Lausanne are not unique. The article aims to give an overview of the major managerial evolutions that are transforming SSEHP into SSEHS (systematic sports events hosting strategies), with the case of Lausanne serving as a common thread to present six significant observable transitions.

Relevância: 10.00%

Resumo:

VariScan is a software package for the analysis of DNA sequence polymorphisms at the whole-genome scale. Among other features, the software (1) can conduct many population genetic analyses; (2) incorporates a multiresolution, wavelet-transform-based method that captures relevant information from DNA polymorphism data; and (3) facilitates the visualization of the results in the most commonly used genome browsers.
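
Feature (2) can be illustrated with a small wavelet decomposition of a synthetic per-window diversity track. PyWavelets stands in for VariScan's internal transform here, and the track, wavelet family and decomposition level are all illustrative assumptions.

```python
# Multiresolution wavelet analysis of a polymorphism track (sketch): a
# discrete wavelet transform separates regional trends from fine-scale
# noise, exposing features such as a broad dip in nucleotide diversity.
import numpy as np
import pywt

rng = np.random.default_rng(3)
positions = np.arange(1024)
# synthetic diversity track: broad dip (e.g. a selective sweep) plus noise
pi_track = 0.01 - 0.006 * np.exp(-((positions - 500) / 60.0) ** 2)
pi_track += rng.normal(0, 0.001, positions.size)

coeffs = pywt.wavedec(pi_track, "db4", level=5)
# keep only the coarsest level to expose the regional trend
coeffs_trend = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(coeffs_trend, "db4")
print(f"dip detected near position {trend.argmin()}")   # ~500
```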