Resumo:
Absolute cross-section measurements for valence-shell photoionization of Ar⁺ ions are reported for photon energies ranging from 27.4 eV to 60.0 eV. The data, taken by merging beams of ions and synchrotron radiation at a photon energy resolution of 10 meV, indicate that the primary ion beam was a statistically weighted mixture of the ²P°₃/₂ ground state and the ²P°₁/₂ metastable state of Ar⁺. Photoionization of this Cl-like ion is characterized by multiple Rydberg series of autoionizing resonances superimposed on a direct photoionization continuum. Observed resonance lineshapes indicate interference between indirect and direct photoionization channels. Resonance features are spectroscopically assigned and their energies and quantum defects are tabulated. The measurements are satisfactorily reproduced by theoretical calculations based on an intermediate-coupling semi-relativistic Breit-Pauli approximation.
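The tabulated resonance energies and quantum defects are related through the Rydberg formula. A minimal sketch (the residual-core charge q = 2 for Rydberg series in Ar⁺, and all numerical values below, are illustrative assumptions, not the paper's data):

```python
RYDBERG_EV = 13.6057  # Rydberg energy in eV

def resonance_energy(e_limit, n, delta, q=2):
    """Energy (eV) of the n-th Rydberg resonance converging to the series
    limit e_limit, for quantum defect delta and residual-core charge q."""
    return e_limit - RYDBERG_EV * q**2 / (n - delta)**2

def quantum_defect(e_limit, e_n, n, q=2):
    """Invert the Rydberg formula to recover the quantum defect from a
    measured resonance energy e_n."""
    return n - q * (RYDBERG_EV / (e_limit - e_n)) ** 0.5
```

The two functions are inverses of each other, so tabulated quantum defects can be checked against the assigned energies and series limits.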
Resumo:
Purpose: To investigate the clinical implications of a variable relative biological effectiveness (RBE) on proton dose fractionation. Experiments with acute exposures have shown that the generic, constant cell-killing RBE currently adopted in the clinic underestimates the effect of the sharp increase in linear energy transfer (LET) in the distal regions of the spread-out Bragg peak (SOBP). However, experimental data on the impact of dose fractionation in such scenarios are still limited.
Methods and Materials: Human fibroblasts (AG01522) at 4 key depth positions on a clinical SOBP of maximum energy 219.65 MeV were subjected to various fractionation regimens with an interfraction period of 24 hours at the Proton Therapy Center in Prague, Czech Republic. Variations in cell-killing RBE were measured using standard clonogenic assays, further validated using Monte Carlo simulations, and parameterized using a linear-quadratic formalism.
Results: Significant variations in the cell killing RBE for fractionated exposures along the proton dose profile were observed. RBE increased sharply toward the distal position, corresponding to a reduction in cell sparing effectiveness of fractionated proton exposures at higher LET. The effect was more pronounced at smaller doses per fraction. Experimental survival fractions were adequately predicted using a linear quadratic formalism assuming full repair between fractions. Data were also used to validate a parameterized variable RBE model based on linear α parameter response with LET that showed considerable deviations from clinically predicted isoeffective fractionation regimens.
Conclusions: The RBE-weighted absorbed dose calculated using the clinically adopted generic RBE of 1.1 significantly underestimates the biological effective dose from variable RBE, particularly in fractionation regimens with low doses per fraction. Coupled with an increase in effective range in fractionated exposures, our study provides an RBE dataset that can be used by the modeling community for the optimization of fractionated proton therapy.
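The linear-quadratic formalism with full repair between fractions, as used above, can be written down directly. A minimal sketch (parameter values in the test are illustrative, not the study's fitted data):

```python
import math

def survival_fraction(n, d, alpha, beta):
    """Clonogenic survival after n fractions of dose d (Gy), under the
    linear-quadratic model assuming full repair between fractions:
    S = exp(-n * (alpha*d + beta*d^2))."""
    return math.exp(-n * (alpha * d + beta * d**2))

def bed(n, d, alpha_beta):
    """Biologically effective dose (Gy) for n fractions of d Gy each,
    given the tissue alpha/beta ratio."""
    return n * d * (1.0 + d / alpha_beta)
```

For a fixed total dose, splitting it into more, smaller fractions raises the survival fraction, which is the cell-sparing effect whose reduction at high LET the study reports.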
Resumo:
This paper examines the connectedness of the Eurozone sovereign debt market over the period 2005–2011. By employing measures built from the variance decompositions of approximating models we are able to define weighted, directed networks that enable a deeper understanding of the relationships between the Eurozone countries. We find that connectedness in the Eurozone was very high during the calm market conditions preceding the global financial crisis but decreased dramatically when the crisis took hold, and worsened as the Eurozone sovereign debt crisis emerged. The drop in connectedness was especially prevalent in the case of the peripheral countries with some of the most peripheral countries deteriorating into isolation. Our results have implications for both market participants and regulators.
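The construction of weighted, directed networks from variance decompositions can be sketched as follows; a minimal illustration in the spirit of such connectedness measures (the function name, normalisation and toy matrix are our assumptions, not the paper's code):

```python
import numpy as np

def connectedness(D):
    """Connectedness measures from a forecast-error variance decomposition
    matrix D, where D[i, j] is the share of variable i's variance explained
    by shocks to variable j.  Rows are normalised to sum to one.

    Returns (total, to_others, from_others): the system-wide connectedness
    index and, per country, the weighted directed links sent and received.
    """
    D = np.asarray(D, dtype=float)
    D = D / D.sum(axis=1, keepdims=True)   # row-normalise the shares
    off = D - np.diag(np.diag(D))          # keep cross-country shares only
    total = off.sum() / D.shape[0]         # total connectedness index
    from_others = off.sum(axis=1)          # what each country receives
    to_others = off.sum(axis=0)            # what each country transmits
    return total, to_others, from_others
```

The off-diagonal entries are the edge weights of the directed network; a country whose row and column sums both shrink toward zero is drifting into the isolation the paper describes for the most peripheral members.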
Resumo:
Ranking problems arise from the knowledge of several binary relations defined on a set of alternatives, which we intend to rank. In a previous work, the authors introduced a tool to confirm the solutions of multi-attribute ranking problems as linear extensions of a weighted sum of preference relations. An extension of this technique allows the recognition of critical preference pairs of alternatives, which are often caused by inconsistencies. Herein, a confirmation procedure is introduced and applied to confirm the results obtained by a multi-attribute decision methodology on a tender for the supply of buses to the Porto Public Transport Operator.
Resumo:
In the past few years a new generation of multifunctional nanoparticles (NPs) has been proposed for biomedical applications, whose structure is more complex than that of their monofunctional predecessors. The development of these novel NPs aims at enabling or improving performance in imaging, diagnosis and therapeutic applications. The structure of such NPs comprises several components with various functionalities that enable the nanoparticles to perform multiple tasks simultaneously, such as active targeting of certain cells or compartments, imaging and delivery of active drugs. This thesis presents two types of bimodal bio-imaging probes and describes their physical and chemical properties, namely their texture, structure, and 1H dynamics and relaxometry, in order to evaluate their potential as MRI contrast agents. The photoluminescence properties of these probes are studied, aiming at assessing their interest as optical contrast agents. These materials combine the properties of trivalent lanthanide (Ln3+) complexes and nanoparticles, offering an excellent solution for bimodal imaging. The designed T1-type contrast agents are SiO2@APS/DTPA:Gd:Ln or SiO2@APS/PMN:Gd:Ln (Ln = Eu or Tb) systems, bearing the magnetically active center (Gd3+) and the optically active ions (Eu3+ and Tb3+) on the surface of silica NPs. Concerning the relaxometry properties, moderate r1 increases and significant r2 increases are observed in the presence of the NPs, especially at high magnetic fields, due to susceptibility effects on r2. The Eu3+ ions reside in a single low-symmetry site, and the photoluminescence emission is not influenced by the simultaneous presence of Gd3+ and Eu3+. The presence of Tb3+, rather than Eu3+, further increases r1 but decreases r2. The uptake of these NPs by living cells is fast and results in an intensity increase in T1-weighted MRI images.
The optical features of the NPs in cellular pellets are also studied and confirm the potential of these new nanoprobes as bimodal imaging agents. This thesis further reports on a T2 contrast agent consisting of core-shell NPs with a silica shell surrounding an iron oxide core. The thickness of this silica shell has a significant impact on the r2 and r2* relaxivities, and a tentative model is proposed to explain this finding. The cell viability and the mitochondrial dehydrogenase expression given by the microglial cells are also evaluated.
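The relaxivities r1 and r2 discussed above are, by definition, the slopes of the relaxation rates 1/T1 and 1/T2 versus contrast-agent concentration. A minimal sketch of how such a slope is extracted from measured data (function name and units are our choices, not the thesis code):

```python
import numpy as np

def relaxivity(conc_mM, T_s):
    """Relaxivity r_i (s^-1 mM^-1) from relaxation times T_s (s) measured
    at contrast-agent concentrations conc_mM (mM): the slope of the linear
    fit of the relaxation rate 1/T against concentration."""
    rates = 1.0 / np.asarray(T_s, dtype=float)
    slope, _intercept = np.polyfit(np.asarray(conc_mM, dtype=float), rates, 1)
    return slope
```

The same routine applies to r2 and r2*, using the corresponding T2 or T2* measurements.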
Resumo:
In this thesis we consider Wiener-Hopf-Hankel operators with Fourier symbols in the classes of almost periodic, semi-almost periodic and piecewise almost periodic functions. In the first place, we consider Wiener-Hopf-Hankel operators acting between L2 Lebesgue spaces with possibly different Fourier matrix symbols in the Wiener-Hopf and in the Hankel operators. In the second place, we consider these operators with equal Fourier symbols and acting between weighted Lebesgue spaces Lp(R;w), where 1 < p < ∞ and w belongs to a subclass of Muckenhoupt weights. In addition, singular integral operators with Carleman shift and almost periodic coefficients are also objects of study. The main purpose of this thesis is to obtain characterizations of the regularity properties of those classes of operators. By regularity properties we mean those that depend on the kernel and cokernel of the operator. The main techniques used are equivalence relations between operators and factorization theory. An invertibility characterization is obtained for the Wiener-Hopf-Hankel operators with symbols belonging to the Wiener subclass of almost periodic functions APW, assuming that a particular matrix function admits a numerical range bounded away from zero and based on the values of a certain mean motion. For Wiener-Hopf-Hankel operators acting between L2-spaces and with possibly different AP symbols, criteria for the semi-Fredholm property and for one-sided and both-sided invertibility are obtained, and the inverses for all possible cases are exhibited. For these results, a new type of AP factorization is introduced. Singular integral operators with Carleman shift and scalar almost periodic coefficients are also studied. Considering an auxiliary, simpler operator, and using appropriate factorizations, the dimensions of the kernels and cokernels of those operators are obtained.
For Wiener-Hopf-Hankel operators with (possibly different) SAP and PAP matrix symbols and acting between L2-spaces, criteria for the Fredholm property are presented, as well as the sum of the Fredholm indices of the Wiener-Hopf plus Hankel and Wiener-Hopf minus Hankel operators. By studying dependencies between different matrix Fourier symbols of Wiener-Hopf plus Hankel operators acting between L2-spaces, results about the kernel and cokernel of those operators are derived. For Wiener-Hopf-Hankel operators acting between weighted Lebesgue spaces Lp(R;w), a study is made considering equal scalar Fourier symbols in the Wiener-Hopf and in the Hankel operators, belonging to the classes APp,w, SAPp,w and PAPp,w. An invertibility characterization is obtained for Wiener-Hopf plus Hankel operators with APp,w symbols. In the cases where the Fourier symbols of the operators belong to SAPp,w and PAPp,w, semi-Fredholm criteria are obtained for Wiener-Hopf-Hankel operators, as well as formulas for the Fredholm indices of those operators.
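For orientation, the operators in question are commonly written as follows in the literature; this notation is our addition and is not spelled out in the abstract:

```latex
W_\Phi \pm H_\Phi \colon \big[L^p_+(\mathbb{R},w)\big]^n \to \big[L^p(\mathbb{R}_+,w)\big]^n,
\qquad
W_\Phi = r_+ \mathcal{F}^{-1}\Phi\cdot\mathcal{F}, \quad
H_\Phi = r_+ \mathcal{F}^{-1}\Phi\cdot\mathcal{F}\,J,
```

where $r_+$ denotes restriction to $\mathbb{R}_+$, $\mathcal{F}$ the Fourier transform, $\Phi$ the (matrix) Fourier symbol, and $(Jf)(x) = f(-x)$ the reflection operator.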
Resumo:
In recent years we have witnessed a change in the way information is made available online. The emergence of the web for everyone made it easy to edit, publish and share information, generating considerable growth in its volume. Systems for collecting and sharing this information quickly appeared which, besides gathering resources, also let users describe them using tags or comments. The automatic organization of this information is one of the greatest challenges of the current web. Although several clustering algorithms exist, the trade-off between effectiveness (forming groups that make sense) and efficiency (running in acceptable time) is hard to achieve. Accordingly, this research investigates whether an automatic document clustering system improves its effectiveness when a social classification system is integrated. We analyse and discuss two methods for document clustering, based on the k-means algorithm, that allow the integration of social tagging in that process. The first integrates the tags directly into the Vector Space Model; the second proposes using the tags to select the initial seeds. The first method allows the tags to be weighted according to their occurrence in the document through the Social Slider parameter. This method was built on a prediction model which suggests that, under cosine similarity, documents that share tags move closer together, whereas documents that do not move further apart. The second method gave rise to an algorithm we call k-C, which, besides selecting the initial seeds through a tag network, also changes how the new centroids are computed at each iteration.
The change to the centroid computation took into account a reflection on the use of Euclidean distance and cosine similarity in the k-means clustering algorithm. For the evaluation of the algorithms, two further algorithms were proposed: the "automatic ground truth" algorithm and the MCI algorithm. The first detects the structure of the data when it is unknown, and the second is an internal evaluation measure based on the cosine similarity between each document and its nearest document. Preliminary results suggest that the first method of integrating tags into the VSM has more impact on the k-means algorithm than on the k-C algorithm. Moreover, the results show no correlation between the choice of the SS parameter and the quality of the clusters. The remaining tests were therefore conducted using only the k-C algorithm (without tag integration in the VSM), and the results obtained indicate that this algorithm tends to generate more effective clusters.
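The first integration method, weighting tag terms in the Vector Space Model by a Social Slider parameter, can be sketched as follows; a minimal illustration under cosine similarity (names and data are ours, not the thesis code):

```python
import numpy as np

def tag_weighted_vectors(tfidf, tag_mask, ss):
    """Scale the tag-term components of document vectors by Social Slider ss.

    tfidf:    (docs x terms) matrix of term weights.
    tag_mask: boolean vector marking which terms also occur as social tags.
    ss >= 1 boosts shared tag terms so that, under cosine similarity,
    documents sharing tags move closer together.
    Returns unit-norm vectors ready for cosine-based k-means.
    """
    X = np.asarray(tfidf, dtype=float).copy()
    X[:, np.asarray(tag_mask, dtype=bool)] *= ss
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.where(norms == 0, 1, norms)
```

With ss = 1 the model reduces to the plain VSM; raising ss increases the cosine similarity of any pair of documents sharing a tag term, which is the behaviour the prediction model above describes.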
Resumo:
Thesis (Ph.D.)--University of Washington, 2015
Resumo:
In this study, we utilise a novel approach to segment out the ventricular system in a series of high-resolution T1-weighted MR images. We present a fast method for reconstructing the brain ventricles, based on processing brain sections and placing a fixed number of landmarks on those sections to reconstruct the 3D surface of the ventricles. Automated landmark extraction is accomplished with a self-organising network, the growing neural gas (GNG), which is able to topographically map the low dimensionality of the network to the high dimensionality of the contour manifold without requiring a priori knowledge of the input space structure. Moreover, our GNG landmark method is tolerant to noise and eliminates outliers. The method accelerates the classical surface reconstruction and filtering processes and offers higher accuracy than methods of similar efficiency, such as Voxel Grid.
Resumo:
What is the best luminance contrast weighting-function for image quality optimization? Traditionally measured contrast sensitivity functions (CSFs) have often been used as weighting-functions in image quality and difference metrics. Such weightings have been shown to result in increased sharpness and perceived quality of test images. We suggest that contextual CSFs (cCSFs) and contextual discrimination functions (cVPFs) should provide bases for further improvement, since these are directly measured from pictorial scenes, modeling threshold and suprathreshold sensitivities within the context of complex masking information. Image quality assessment is understood to require detection and discrimination of masked signals, making contextual sensitivity and discrimination functions directly relevant. In this investigation, test images are weighted with a traditional CSF, a cCSF, a cVPF and a constant function. Controlled mutations of these functions are also applied as weighting-functions, seeking the optimal spatial frequency band weighting for quality optimization. Image quality, sharpness and naturalness are then assessed in two-alternative forced-choice psychophysical tests. We show that maximal quality for our test images results from cCSFs and cVPFs mutated to boost contrast in the higher visible frequencies.
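Applying a frequency weighting-function to a test image, as in the experiments above, can be sketched as follows; a minimal illustration (the radial-frequency parameterisation and names are our assumptions, not the paper's implementation):

```python
import numpy as np

def apply_frequency_weighting(img, weight_fn):
    """Weight an image's spatial-frequency content by weight_fn(f), where f
    is radial frequency in cycles per image.  A CSF, cCSF, cVPF or constant
    function can be plugged in as weight_fn."""
    a = np.asarray(img, dtype=float)
    F = np.fft.fft2(a)
    fy = np.fft.fftfreq(a.shape[0]) * a.shape[0]   # vertical frequencies
    fx = np.fft.fftfreq(a.shape[1]) * a.shape[1]   # horizontal frequencies
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return np.real(np.fft.ifft2(F * weight_fn(radius)))
```

A constant weighting-function leaves the image unchanged, which serves as the control condition; mutated weightings simply reshape `weight_fn` band by band.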
Resumo:
This paper presents a Unit Commitment model with reactive power compensation that is solved by Genetic Algorithm (GA) optimization techniques. The GA has been developed as a computational tool coded in MATLAB. The main objective is to find the best generation schedule, minimizing active power losses and the reactive power to be compensated, subject to the power system technical constraints. These are: the full AC power flow equations and the active and reactive power generation constraints. All constraints represented in the objective function are weighted with penalty factors. The IEEE 14-bus system has been used as a test case to demonstrate the effectiveness of the proposed algorithm. Results and conclusions are duly drawn.
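The penalty-weighted objective described above, in which constraint violations are added to the losses with penalty factors, can be sketched as (a minimal illustration, not the paper's MATLAB code):

```python
def fitness(losses_MW, violations, penalty_factors):
    """Penalty-weighted objective for a unit-commitment GA: active-power
    losses plus each constraint violation scaled by its penalty factor.

    losses_MW:       active power losses of a candidate schedule.
    violations:      non-negative constraint-violation magnitudes
                     (e.g. power-flow mismatch, generation-limit excess).
    penalty_factors: one weight per violation term.  Lower fitness is better.
    """
    return losses_MW + sum(w * v for w, v in zip(penalty_factors, violations))
```

A feasible schedule (all violations zero) is scored by its losses alone, so with sufficiently large penalty factors the GA is driven toward feasible, low-loss solutions.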
Resumo:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale, as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in selecting the location of distributed generation, taking into account the hourly load changes, i.e. the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to fit the best-fitting probability distribution. This distribution is used to determine the maximum-likelihood perimeter of the area where each distributed generation source should preferably be located by the planning engineers. This takes into account, for example, the availability and cost of land lots, factors of special relevance in urban areas, as well as several obstacles important for the final selection of candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution and the Weibull probability distribution. The methodology's algorithm has been programmed in MATLAB. Results are presented and discussed for the application of the methodology to a realistic case and demonstrate its ability to efficiently determine the best location of distributed generation and the corresponding distribution networks.
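The weighted fit of a bivariate distribution to the hourly load centers can be sketched for the Gaussian case; a minimal illustration (the maximum-likelihood perimeter would then be a level ellipse of this density; names and data are ours, not the paper's code):

```python
import numpy as np

def weighted_gaussian_fit(points, weights):
    """Weighted mean and covariance of hourly load-center points, i.e. the
    maximum-likelihood parameters of a bivariate Gaussian.

    points:  (x, y) load-center coordinates, one per hourly scenario.
    weights: the corresponding load magnitudes.
    """
    P = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise load weights
    mean = w @ P                          # weighted centroid
    diff = P - mean
    cov = (w[:, None] * diff).T @ diff    # weighted covariance matrix
    return mean, cov
```

For the other two candidate distributions (Freund's bivariate exponential, Weibull), the same weighted points would feed their respective likelihood maximisations.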