958 results for PARTITION
Abstract:
This PhD thesis concerns geochemical constraints on recycling and partial melting of Archean continental crust. A natural example of such processes was found in the Iisalmi area of Central Finland. The rocks from this area are Middle to Late Archean in age and experienced metamorphism and partial melting between 2.70 and 2.63 Ga. The work is based on extensive field work together with bulk rock geochemical data and in-situ analyses of minerals. All geochemical data were obtained at the Institute of Geosciences, University of Mainz, using X-ray fluorescence, solution ICP-MS and laser ablation-ICP-MS for bulk rock geochemical analyses. Mineral analyses were accomplished by electron microprobe and laser ablation-ICP-MS. Fluid inclusions were studied by microscopy on a heating-freezing stage at the Geoscience Center, University of Göttingen.
Part I focuses on the development of a new analytical method for bulk rock trace element determination by laser ablation-ICP-MS using homogeneous glasses fused from rock powder on an iridium strip heater. This method is applicable to mafic rock samples whose melts have low viscosities and homogenize quickly at temperatures of ~1200°C. The highly viscous melts of felsic samples prevent melting and homogenization at comparable temperatures. Fusion of felsic samples can be enabled by addition of MgO to the rock powder and by adjusting the melting temperature and duration to the rock composition. Advantages of the fusion method are lower detection limits than XRF analysis, avoidance of the wet-chemical processing and strong acids required for solution ICP-MS, and smaller sample volumes than either of the other methods.
Part II of the thesis uses bulk rock geochemical data and results from fluid inclusion studies to discriminate the melting processes observed in different rock types. Fluid inclusion studies demonstrate a major change in fluid composition, from CO2-dominated fluids in granulites to aqueous fluids in TTG gneisses and amphibolites. Partial melts were generated in the dry, CO2-rich environment by dehydration melting reactions of amphibole which, in addition to tonalitic melts, produced the anhydrous mineral assemblages of granulites (grt + cpx + pl ± amph or opx + cpx + pl + amph). Trace element modeling showed that mafic granulites are residues of 10-30 % melt extraction from amphibolitic precursor rocks. The maximum degree of melting in intermediate granulites was ~10 %, as inferred from modal abundances of amphibole, clinopyroxene and orthopyroxene. Carbonic inclusions are absent in upper-amphibolite facies migmatites, whereas aqueous inclusions with up to 20 wt% NaCl are abundant. This suggests that melting within TTG gneisses and amphibolites took place in the presence of an aqueous fluid phase that enabled melting at the wet solidus at temperatures of 700-750°C. The strong disruption of pre-metamorphic structures in some outcrops suggests that the maximum amount of melt in TTG gneisses was ~25 vol%. The presence of leucosomes in all rock types is taken as the principal evidence for melt formation. However, the mineralogical appearance as well as the major and trace element compositions of many leucosomes imply that they seldom represent frozen in-situ melts. They are better considered remnants of the melt channel network, i.e., pathways along which melts escaped from the system.
Part III of the thesis describes how analyses of minerals from a specific rock type (granulite) can be used to determine partition coefficients between different minerals, and between minerals and melt, suitable for lower crustal conditions. The trace element analyses by laser ablation-ICP-MS show a coherent distribution among the principal mineral phases, independent of rock composition. REE contents in amphibole are about 3 times higher than in clinopyroxene from the same sample. This consistency has to be taken into consideration in models of lower crustal melting in which amphibole is replaced by clinopyroxene in the course of melting. A lack of equilibrium is observed between matrix clinopyroxene/amphibole and garnet porphyroblasts, which suggests late-stage growth of the garnet and slow diffusion and equilibration of the REE during metamorphism. The data provide a first set of distribution coefficients for the transition metals (Sc, V, Cr, Ni) in the lower crust. In addition, analyses of ilmenite and apatite demonstrate the strong influence of accessory phases on trace element distribution: apatite contains high amounts of REE and Sr, while ilmenite incorporates about 20-30 times more Nb and Ta than amphibole. Furthermore, the trace element mineral analyses provide evidence that magmatic processes such as melt depletion, melt segregation, accumulation and fractionation, as well as metasomatism, operated in this high-grade anatectic area.
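As background to the melt-extraction estimates and mineral/melt partition coefficients summarized above, the standard modal batch-melting relations (quoted here as the generic textbook formulation, not necessarily the exact model used in the thesis) are $$D_{i}=\frac{C_{i}^{\mathrm{mineral}}}{C_{i}^{\mathrm{melt}}},\qquad D_{i}^{\mathrm{bulk}}=\sum_{j}x_{j}D_{i,j},\qquad \frac{C_{i}^{\mathrm{melt}}}{C_{i}^{0}}=\frac{1}{D_{i}^{\mathrm{bulk}}+F\,(1-D_{i}^{\mathrm{bulk}})},\qquad C_{i}^{\mathrm{residue}}=D_{i}^{\mathrm{bulk}}\,C_{i}^{\mathrm{melt}},$$ where $F$ is the melt fraction (0.1-0.3 for the mafic granulites above), $x_{j}$ are the modal proportions of the residual minerals, and $C_{i}^{0}$ is the concentration of element $i$ in the protolith.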
Abstract:
The topic of quality agri-food products has by now taken on a prominent role in the debate on agriculture and the agri-food economy, placing itself at the center of European policy interest (post-2013 CAP, Quality Package). The growing attention to products carrying the DOP/IGP (PDO/PGI) label, however, highlights a substantial lack of detailed information about the farms operating in this sector. From this point of view, the 6th General Census of Agriculture is a valuable source of statistical information. The aim of this work is to use the still provisional census data to analyze the structure of farms with quality productions, comparing it with that of conventional farms. In addition, farms with DOP/IGP products were classified according to the weight of these productions in the farm's gross income. Farms were thus classified as "Specialized" or "Mixed", with a further distinction of the latter between "Predominantly DOP/IGP" and "Predominantly non-DOP/IGP" farms. This partition allowed a detailed definition of the production orientations of the analyzed farms.
Abstract:
The last decade has witnessed very fast development in microfabrication technologies. The increasing industrial applications of microfluidic systems call for more intensive and systematic knowledge of this newly emerging field. Especially for gaseous flow and heat transfer at the microscale, the applicability of conventional theories developed at the macroscale is not yet completely validated, mainly because of the scarce experimental data available in the literature for gas flows. The objective of this thesis is to investigate these unclear elements by analyzing forced convection for gaseous flows through microtubes and micro heat exchangers. Experimental tests have been performed with microtubes having various inner diameters, namely 750 µm, 510 µm and 170 µm, over a wide range of Reynolds numbers covering the laminar region, the transitional zone and the onset of the turbulent regime. The results show that conventional theory is able to predict the flow friction factor when flow compressibility does not appear and the effect of temperature-dependent fluid properties is insignificant. A double-layered microchannel heat exchanger has been designed in order to study experimentally the efficiency of a gas-to-gas micro heat exchanger. This microdevice contains 133 parallel microchannels machined into polished PEEK plates for both the hot side and the cold side. The microchannels are 200 µm high, 200 µm wide and 39.8 mm long. The microdevice has been designed so that different materials, with flexible thickness, can be tested as the partition foil. Experimental tests have been carried out for five different partition foils, with various mass flow rates and flow configurations. The experimental results indicate that the thermal performance of the countercurrent and cross-flow micro heat exchanger can be strongly influenced by axial conduction in the partition foil separating the hot and cold gas flows.
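For orientation, a minimal Python sketch (using textbook definitions; the thesis's own data reduction may differ) of the two quantities usually used to interpret such measurements, the exchanger effectiveness and a dimensionless axial-conduction parameter for the partition foil:

def hx_effectiveness(Th_in, Th_out, Tc_in, C_hot, C_cold):
    """Effectiveness = actual heat duty / thermodynamic maximum.
    C_hot and C_cold are the capacity rates m_dot * cp of the streams [W/K]."""
    q = C_hot * (Th_in - Th_out)
    q_max = min(C_hot, C_cold) * (Th_in - Tc_in)
    return q / q_max

def wall_conduction_parameter(k_wall, A_cross, length, C_min):
    """Dimensionless axial-conduction parameter of the partition foil:
    larger values mean conduction along the foil is more likely to degrade
    counterflow performance."""
    return k_wall * A_cross / (length * C_min)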
Abstract:
Crop water requirements are important elements of food production, especially in arid and semiarid regions. These regions are experiencing increasing population growth and have less water available for agriculture, which amplifies the need for more efficient irrigation. Improved water use efficiency is needed to produce more food while conserving water as a limited natural resource. Evaporation (E) from bare soil and transpiration (T) from plants are considered critical parts of the global water cycle, and climate change in recent decades could lead to increased E and T. Because energy is required to break hydrogen bonds and vaporize water, the water and energy balances are closely connected. The soil water balance is also linked to water vapour losses through evapotranspiration (ET), which depend mainly on the energy balance at the Earth's surface. This work addresses the role of evapotranspiration in water use efficiency by developing a mathematical model that improves the accuracy of crop evapotranspiration calculation, accounting for the effects of weather conditions, e.g., wind speed and humidity, on the crop coefficient, which relates crop evapotranspiration to reference evapotranspiration. The ability to partition ET into its evaporation and transpiration components will help irrigation managers find ways to improve water use efficiency by decreasing the ratio of evaporation to transpiration. The developed crop coefficient model will improve both irrigation scheduling and water resources planning in response to future climate change, which can improve world food production and water use efficiency in agriculture.
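For reference, the FAO-56 dual crop coefficient formulation that such a model typically extends (quoted as the standard equations, not the thesis's new model) is $$ET_{c}=K_{c}\,ET_{0},\qquad K_{c}=K_{cb}+K_{e},\qquad K_{cb}=K_{cb,\mathrm{tab}}+\bigl[0.04\,(u_{2}-2)-0.004\,(RH_{\min}-45)\bigr]\Bigl(\frac{h}{3}\Bigr)^{0.3},$$ where $ET_{0}$ is the reference evapotranspiration, $K_{cb}$ the basal (transpiration) coefficient, $K_{e}$ the soil-evaporation coefficient, $u_{2}$ the wind speed at 2 m height, $RH_{\min}$ the minimum relative humidity, and $h$ the crop height; the split of $K_{c}$ into $K_{cb}$ and $K_{e}$ is what allows ET to be partitioned into T and E.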
Abstract:
This dissertation deals with the Turkish college admission procedure. It started with the purpose of reducing the inefficiencies in the Turkish market. For this purpose, we propose a mechanism under a new market structure that we prefer to call semi-centralization. In chapter 1, we give a brief summary of matching theory. We present the first examples in matching history together with the most general papers and mechanisms. In chapter 2, we propose our mechanism. In its real-life application, that is, Turkish university placements, the mechanism reduces the inefficiencies of the current system. The success of the mechanism depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In chapter 3, we refine our basic mechanism. The modification of the mechanism has a crucial effect on the results. The new mechanism is what we call a middle mechanism. On one of the subdomains, this mechanism coincides with the original basic mechanism; on the other partition, it gives the same results as Gale and Shapley's algorithm. In chapter 4, we apply our basic mechanism to the well-known Roommate Problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert the game into a semi-centralized two-sided game, because our basic mechanism is designed for this framework. We show that this process succeeds in finding a stable matching whenever one exists. We also show that, by using purified orderings, our mechanism easily tells us whether a profile lacks stability. Finally, we give a method to find all the stable matchings when more than one exists: simply run the mechanism for all of the top agents in the social preference.
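As background for the benchmark mentioned in chapter 3, here is a minimal Python sketch of the classical student-proposing Gale-Shapley (deferred acceptance) algorithm in its textbook one-to-one form; it is not the semi-centralized mechanism proposed in the dissertation, and it assumes complete preference lists with equally many students and colleges.

def deferred_acceptance(student_prefs, college_prefs):
    """Classical student-proposing Gale-Shapley algorithm (one-to-one case).
    student_prefs: dict student -> list of colleges, most preferred first.
    college_prefs: dict college -> list of students, most preferred first."""
    rank = {c: {s: i for i, s in enumerate(prefs)} for c, prefs in college_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}   # next college each student will propose to
    held = {}                                     # college -> tentatively held student
    free = list(student_prefs)
    while free:
        s = free.pop()
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if c not in held:
            held[c] = s                           # college holds its first proposer
        elif rank[c][s] < rank[c][held[c]]:
            free.append(held[c])                  # college rejects its current student
            held[c] = s
        else:
            free.append(s)                        # proposal rejected, s stays free
    return {s: c for c, s in held.items()}

# Example with two students and two colleges.
students = {"s1": ["c1", "c2"], "s2": ["c1", "c2"]}
colleges = {"c1": ["s2", "s1"], "c2": ["s1", "s2"]}
print(deferred_acceptance(students, colleges))    # {'s2': 'c1', 's1': 'c2'}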
Abstract:
In this work I report recent results in the field of equilibrium statistical mechanics, in particular on spin glass models and monomer-dimer models. We start by giving the mathematical background and the general formalism for spin (disordered) models, with some of their applications to physical and mathematical problems. Next we move on to general aspects of the theory of spin glasses, in particular the Sherrington-Kirkpatrick model, which is of fundamental interest for this work. In Chapter 3, we introduce the multi-species Sherrington-Kirkpatrick model (MSK); we prove the existence of the thermodynamic limit and Guerra's bound for the quenched pressure, together with a detailed analysis of the annealed and replica symmetric regimes. The result is a multidimensional generalization of Parisi's theory. Finally we briefly illustrate the strategy of Panchenko's proof of the lower bound. In Chapter 4 we discuss the Aizenman-Contucci and the Ghirlanda-Guerra identities for a wide class of spin glass models. As an example of application, we discuss the role of these identities in the proof of the lower bound. In Chapter 5 we introduce the basic mathematical formalism of monomer-dimer models. We introduce a Gaussian representation of the partition function that will be fundamental in the rest of the work. In Chapter 6, we introduce an interacting monomer-dimer model. Its exact solution is derived and a detailed study of its analytical properties and related physical quantities is performed. In Chapter 7, we introduce quenched randomness in the monomer-dimer model and show that, under suitable conditions, the pressure is a self-averaging quantity. The main result is that, if we consider randomness only in the monomer activity, the model is exactly solvable.
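To fix notation for the two model families discussed above, the standard definitions (the usual textbook forms, which the thesis then generalizes) are $$H_{N}(\sigma)=-\frac{1}{\sqrt{N}}\sum_{1\le i<j\le N}J_{ij}\,\sigma_{i}\sigma_{j},\qquad p_{N}(\beta)=\frac{1}{N}\,\mathbb{E}\log\sum_{\sigma\in\{-1,+1\}^{N}}e^{-\beta H_{N}(\sigma)}$$ for the Sherrington-Kirkpatrick model with i.i.d. standard Gaussian couplings $J_{ij}$ and quenched pressure $p_{N}$, and $$Z_{G}(x)=\sum_{M\in\mathcal{M}(G)}x^{\,|V|-2|M|}\prod_{(i,j)\in M}w_{ij}$$ for the monomer-dimer partition function, where the sum runs over the matchings (dimer configurations) $M$ of the graph $G=(V,E)$, $x>0$ is the monomer activity and the $w_{ij}$ are dimer weights.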
Abstract:
The long process that would lead Octavian to the conquest of power and the foundation of the principate is marked by a few key moments and by the various figures who accompanied his rise. Even before excelling on the battlefield, Caesar's adopted son stood out for his great ability to manage alliances and personal relationships, and for the great skill with which he managed to move from revolutionary leader to representative and member of the traditional aristocracy. It was not an easy or linear path, and perhaps the most difficult task was not routing his opponents at Actium but preserving a power that was constantly contested. Even after 31 BC, in fact, on more than one occasion Augustus was called upon to defend his creation (the principate) and to keep modifying its power base and structure: only through this fundamental, meticulous, yet hidden work did he manage to lay the foundations for a power structure destined to last unchanged for at least a century. On these premises, the research is organized according to a twofold chronological criterion, inserting, within the frame provided by the events, a partition that takes into account further breaks and decisive moments. The aim is to underline how, within a unitary reign characterized by the permanence of a single ruler, it is possible to glimpse the alternation of different historical situations, balances of power, alliances and unions by virtue of which different orientations emerged in both domestic and foreign policy.
Abstract:
Data Distribution Management (DDM) is a component of the High Level Architecture standard. Its task is to detect overlaps between update and subscription extents efficiently. This thesis discusses the need for a framework and the reasons why it was implemented. Testing algorithms for a fair comparison, libraries to ease the implementation of algorithms, and automation of the build phase were the main motivations for starting the development of the framework. The driving reason was that, while surveying scientific papers on DDM and the various algorithms, it was noticed that each paper created its own ad hoc data for testing. A further goal of this framework is to be able to compare the algorithms on a consistent data set. It was decided to test the framework in the cloud in order to obtain a more reliable comparison between runs by different users. Two of the most widely used services were considered: Amazon AWS EC2 and Google App Engine. The advantages and disadvantages of each are shown, along with the reason why Google App Engine was chosen. Four algorithms were developed: Brute Force, Binary Partition, Improved Sort, and Interval Tree Matching. Tests were performed on execution time and peak memory usage. The results show that Interval Tree Matching and Improved Sort are the most efficient. All tests were carried out on the sequential versions of the algorithms, so a further reduction in execution time is possible for the Interval Tree Matching algorithm.
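To illustrate the matching problem these algorithms solve, a minimal Python sketch of the Brute Force approach on one-dimensional extents follows (the framework's actual interfaces and the other three algorithms are not reproduced here):

def brute_force_matching(update_extents, subscription_extents):
    """Report every (update, subscription) pair whose 1-D extents overlap.
    Extents are (lower, upper) tuples; complexity is O(U * S), which is what
    the sort-based and interval-tree algorithms improve upon."""
    matches = []
    for u_id, (u_lo, u_hi) in enumerate(update_extents):
        for s_id, (s_lo, s_hi) in enumerate(subscription_extents):
            if u_lo <= s_hi and s_lo <= u_hi:   # standard interval-overlap test
                matches.append((u_id, s_id))
    return matches

# Example: one overlapping pair and one disjoint pair.
print(brute_force_matching([(0, 5), (10, 12)], [(4, 8), (20, 25)]))   # [(0, 0)]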
Abstract:
In many patients, optimal results after pallidal deep brain stimulation (DBS) for primary dystonia may appear over several months, possibly beyond 1 year after implant. In order to elucidate the factors predicting such a protracted clinical effect, we retrospectively reviewed the clinical records of 44 patients with primary dystonia and bilateral pallidal DBS implants. Patients with fixed skeletal deformities, as well as those with a history of prior ablative procedures, were excluded. The Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS) scores at baseline and at 1 and 3 years after DBS were used to evaluate clinical outcome. All subjects showed a significant improvement after DBS implants (mean BFMDRS improvement of 74.9% at 1 year and 82.6% at 3 years). Disease duration (DD, median 15 years, range 2-42) and age at surgery (AS, median 31 years, range 10-59) showed a significant negative correlation with DBS outcome at 1 and 3 years. A partition analysis, using DD and AS, clustered subjects into three groups: (1) younger subjects with shorter DD (n = 19, AS < 27, DD ≤ 17); (2) older subjects with shorter DD (n = 8, DD ≤ 17, AS ≥ 27); (3) older subjects with longer DD (n = 17, DD > 17, AS ≥ 27). Younger patients with short DD benefitted more and faster than older patients, who however continued to improve by 10% on average 1 year after DBS implants. Our data suggest that subjects with short DD may expect to achieve a better general outcome than those with longer DD, and that AS may influence the time necessary to achieve maximal clinical response.
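A small illustrative Python sketch of the partition implied by the reported thresholds (variable names are hypothetical; the published analysis used a formal partition algorithm on the cohort data):

def dbs_outcome_group(disease_duration_years, age_at_surgery_years):
    """Map a patient onto the three clusters reported above
    (DD threshold 17 years, AS threshold 27 years)."""
    if disease_duration_years > 17:
        if age_at_surgery_years >= 27:
            return "group 3: older, longer DD"
        return "not covered: the cohort had no younger patients with long DD"
    if age_at_surgery_years < 27:
        return "group 1: younger, shorter DD"
    return "group 2: older, shorter DD"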
Abstract:
Semi-weak n-hyponormality is defined and studied using the notion of positive determinant partition. Several examples related to semi-weakly n-hyponormal weighted shifts are discussed. In particular, it is proved that there exists a semi-weakly three-hyponormal weighted shift W_α with α₀ = α₁ < α₂ which is not two-hyponormal, which illustrates the gaps between various weak subnormalities.
Abstract:
Binding of hydrophobic chemicals to colloids such as proteins or lipids is difficult to measure using classical microdialysis methods due to low aqueous concentrations, adsorption to dialysis membranes and test vessels, and slow kinetics of equilibration. Here, we employed a three-phase partitioning system where silicone (polydimethylsiloxane, PDMS) serves as a third phase to determine partitioning between water and colloids and acts at the same time as a dosing device for hydrophobic chemicals. The applicability of this method was demonstrated with bovine serum albumin (BSA). Measured binding constants (K(BSAw)) for chlorpyrifos, methoxychlor, nonylphenol, and pyrene were in good agreement with an established quantitative structure-activity relationship (QSAR). A fifth compound, fluoxypyr-methyl-heptyl ester, was excluded from the analysis because of apparent abiotic degradation. The PDMS depletion method was then used to determine partition coefficients for test chemicals in rainbow trout (Oncorhynchus mykiss) liver S9 fractions (K(S9w)) and blood plasma (K(bloodw)). Measured K(S9w) and K(bloodw) values were consistent with predictions obtained using a mass-balance model that employs the octanol-water partition coefficient (K(ow)) as a surrogate for lipid partitioning and K(BSAw) to represent protein binding. For each compound, K(bloodw) was substantially greater than K(S9w), primarily because blood contains more lipid than liver S9 fractions (1.84% of wet weight vs 0.051%). Measured liver S9 and blood plasma binding parameters were subsequently implemented in an in vitro to in vivo extrapolation model to link the in vitro liver S9 metabolic degradation assay to in vivo metabolism in fish. Apparent volumes of distribution (V(d)) calculated from the experimental data were similar to literature estimates. However, the calculated binding ratios (f(u)) used to relate in vitro metabolic clearance to clearance by the intact liver were 10 to 100 times lower than values used in previous modeling efforts. Bioconcentration factors (BCF) predicted using the experimental binding data were substantially higher than the predicted values obtained in earlier studies and correlated poorly with measured BCF values in fish. One possible explanation for this finding is that chemicals bound to proteins can desorb rapidly and thus contribute to metabolic turnover of the chemicals. This hypothesis remains to be investigated in future studies, ideally with chemicals of higher hydrophobicity.
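A minimal Python sketch of the kind of additive mass-balance estimate described above, with K_ow standing in for lipid partitioning and K_BSAw for protein binding (the additive form and the fraction-unbound expression are illustrative assumptions, not the study's exact parameterization):

def matrix_water_partition(f_water, f_lipid, f_protein, K_ow, K_BSAw):
    """Additive estimate of a matrix/water partition coefficient (e.g. K_S9w
    or K_bloodw) from the matrix composition; f_* are the water, lipid and
    protein fractions of the matrix."""
    return f_water + f_lipid * K_ow + f_protein * K_BSAw

def fraction_unbound(f_water, K_matrix_water):
    """Freely dissolved fraction of the chemical in the matrix, the quantity
    used to scale in vitro clearance to the intact liver."""
    return f_water / K_matrix_water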
Abstract:
The goal of this paper is to contribute to the understanding of complex polynomials and Blaschke products, two very important function classes in mathematics. For a polynomial, $f,$ of degree $n,$ we study when it is possible to write $f$ as a composition $f=g\circ h$, where $g$ and $h$ are polynomials, each of degree less than $n.$ A polynomial is defined to be \emph{decomposable} if such an $h$ and $g$ exist, and a polynomial is said to be \emph{indecomposable} if no such $h$ and $g$ exist. We apply the results of Rickards in \cite{key-2}. We show that $$C_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,(z-z_{1})(z-z_{2})...(z-z_{n})\,\mbox{is decomposable}\}$$ has measure $0$ when considered as a subset of $\mathbb{R}^{2n}.$ Using this we prove the stronger result that $$D_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,\mbox{There exists\,}a\in\mathbb{C}\,\,\mbox{with}\,\,(z-z_{1})(z-z_{2})...(z-z_{n})(z-a)\,\mbox{decomposable}\}$$ also has measure zero when considered as a subset of $\mathbb{R}^{2n}.$ We show that for any polynomial $p$, there exists an $a\in\mathbb{C}$ such that $p(z)(z-a)$ is indecomposable, and we also examine the case of $D_{5}$ in detail. The main work of this paper studies finite Blaschke products, analytic functions on $\overline{\mathbb{D}}$ that map $\partial\mathbb{D}$ to $\partial\mathbb{D}.$ In analogy with polynomials, we discuss when a degree $n$ Blaschke product, $B,$ can be written as a composition $C\circ D$, where $C$ and $D$ are finite Blaschke products, each of degree less than $n.$ Decomposable and indecomposable are defined analogously. Our main results are divided into two sections. First, we equate a condition on the zeros of the Blaschke product with the existence of a decomposition where the right-hand factor, $D,$ has degree $2.$ We also equate decomposability of a Blaschke product, $B,$ with the existence of a Poncelet curve, whose foci are a subset of the zeros of $B,$ such that the Poncelet curve satisfies certain tangency conditions. This result is hard to apply in general, but has a very nice geometric interpretation when we desire a composition where the right-hand factor has degree 2 or 3. Our second section of finite Blaschke product results builds on the work of Cowen in \cite{key-3}. For a finite Blaschke product $B,$ Cowen defines the so-called monodromy group, $G_{B},$ of the finite Blaschke product. He then equates the decomposability of a finite Blaschke product, $B,$ with the existence of a nontrivial partition, $\mathcal{P},$ of the branches of $B^{-1}(z),$ such that $G_{B}$ respects $\mathcal{P}$. We present an in-depth analysis of how to calculate $G_{B}$, extending Cowen's description. These methods allow us to equate the existence of a decomposition where the left-hand factor has degree 2 with a simple condition on the critical points of the Blaschke product. In addition, we are able to put a condition on the structure of $G_{B}$ for any decomposable Blaschke product satisfying certain normalization conditions. The final section of this paper discusses how one can put the results of the paper into practice to determine whether a particular Blaschke product is decomposable. We compare three major algorithms.
The first is a brute force technique where one searches through the zero set of $B$ for subsets which could be the zero set of $D$, exhaustively searching for a successful decomposition $B(z)=C(D(z)).$ The second algorithm involves simply examining the cardinality of the image, under $B,$ of the set of critical points of $B.$ For a degree $n$ Blaschke product, $B,$ if this cardinality is greater than $\frac{n}{2}$, the Blaschke product is indecomposable. The final algorithm attempts to apply the geometric interpretation of decomposability given by our theorem concerning the existence of a particular Poncelet curve. The final two algorithms can be implemented easily with the use of an HTML
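A numerical Python sketch of the second algorithm (the critical-value counting test) is given below; it assumes the Blaschke product is specified by its zeros in the open unit disk and relies on numpy root finding, so it is an illustration of the criterion rather than the paper's implementation:

import numpy as np

def blaschke(z, zeros):
    """Evaluate B(z) = prod (z - a) / (1 - conj(a) z) over the given zeros."""
    z = np.asarray(z, dtype=complex)
    B = np.ones_like(z)
    for a in zeros:
        B *= (z - a) / (1 - np.conj(a) * z)
    return B

def critical_points_in_disk(zeros):
    """Critical points of B inside the unit disk: roots of the numerator of
    B' = (P'Q - PQ')/Q^2 with P = prod(z - a) and Q = prod(1 - conj(a) z)."""
    P = np.poly1d([1.0 + 0j])
    Q = np.poly1d([1.0 + 0j])
    for a in zeros:
        P *= np.poly1d([1.0, -a])            # factor (z - a)
        Q *= np.poly1d([-np.conj(a), 1.0])   # factor (1 - conj(a) z)
    numerator = P.deriv() * Q - P * Q.deriv()
    return [r for r in numerator.roots if abs(r) < 1]

def provably_indecomposable(zeros, tol=1e-8):
    """Sufficient test from the abstract: if B has more than n/2 distinct
    critical values, no decomposition into nontrivial factors exists."""
    n = len(zeros)
    values = blaschke(critical_points_in_disk(zeros), zeros)
    distinct = []
    for v in values:
        if all(abs(v - w) > tol for w in distinct):
            distinct.append(v)
    return len(distinct) > n / 2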
Abstract:
Features encapsulate the domain knowledge of a software system and thus are valuable sources of information for a reverse engineer. When analyzing the evolution of a system, we need to know how and which features were modified to recover both the change intention and its extent, namely which source artifacts are affected. Typically, the implementation of a feature crosscuts a number of source artifacts. To obtain a mapping from features to the source artifacts, we exercise the features and capture their execution traces. However, this results in large traces that are difficult to interpret. To tackle this issue we compact the traces into simple sets of source artifacts that participate in a feature's runtime behavior. We refer to these compacted traces as feature views. Within a feature view, we partition the source artifacts into disjoint sets of characterized software entities. The characterization defines the level of participation of a source entity in the features. We then analyze the features over several versions of a system and we plot their evolution to reveal how and which features were affected by changes in the code. We show the usefulness of our approach by applying it to a case study where we address the problem of merging parallel development tracks of the same system.
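A minimal Python sketch of how execution traces can be compacted into feature views and their entities partitioned (the three characterization labels are illustrative placeholders, not necessarily the paper's terminology):

from collections import defaultdict

def feature_views(traces):
    """Compact each execution trace (feature -> ordered list of executed
    source entities) into the set of entities that participate in it."""
    return {feature: set(trace) for feature, trace in traces.items()}

def characterize_entities(views):
    """Partition the source entities into disjoint sets according to how many
    feature views they appear in."""
    features_per_entity = defaultdict(set)
    for feature, entities in views.items():
        for entity in entities:
            features_per_entity[entity].add(feature)
    n_features = len(views)
    partition = {"single-feature": set(), "shared-by-some": set(), "shared-by-all": set()}
    for entity, feats in features_per_entity.items():
        if len(feats) == 1:
            partition["single-feature"].add(entity)
        elif len(feats) < n_features:
            partition["shared-by-some"].add(entity)
        else:
            partition["shared-by-all"].add(entity)
    return partition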
Abstract:
Professor Sir David R. Cox (DRC) is widely acknowledged as among the most important scientists of the second half of the twentieth century. He inherited the mantle of statistical science from Pearson and Fisher, advanced their ideas, and translated statistical theory into practice so as to forever change the application of statistics in many fields, but especially biology and medicine. The logistic and proportional hazards models he substantially developed are arguably among the most influential biostatistical methods in current practice. This paper looks forward over the period from DRC's 80th to 90th birthdays, to speculate about the future of biostatistics, drawing lessons from DRC's contributions along the way. We consider "Cox's model" of biostatistics, an approach to statistical science that: formulates scientific questions or quantities in terms of parameters gamma in probability models f(y; gamma) that represent, in a parsimonious fashion, the underlying scientific mechanisms (Cox, 1997); partitions the parameters gamma = (theta, eta) into a subset of interest theta and other "nuisance parameters" eta necessary to complete the probability distribution (Cox and Hinkley, 1974); develops methods of inference about the scientific quantities that depend as little as possible upon the nuisance parameters (Barndorff-Nielsen and Cox, 1989); and thinks critically about the appropriate conditional distribution on which to base inferences. We briefly review exciting biomedical and public health challenges that are capable of driving statistical developments in the next decade. We discuss the statistical models and model-based inferences central to the CM approach, contrasting them with computationally-intensive strategies for prediction and inference advocated by Breiman and others (e.g. Breiman, 2001) and with more traditional design-based methods of inference (Fisher, 1935). We discuss the hierarchical (multi-level) model as an example of the future challenges and opportunities for model-based inference. We then consider the role of conditional inference, a second key element of the CM. Recent examples from genetics are used to illustrate these ideas. Finally, the paper examines causal inference and statistical computing, two other topics we believe will be central to biostatistics research and practice in the coming decade. Throughout the paper, we attempt to indicate how DRC's work and the "Cox Model" have set a standard of excellence to which all can aspire in the future.
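As a concrete instance of the parameter partition gamma = (theta, eta) described above, the two models named in the abstract have the standard forms $$\operatorname{logit}\Pr(Y_{i}=1\mid x_{i})=x_{i}^{\top}\beta,\qquad \lambda(t\mid x_{i})=\lambda_{0}(t)\,\exp\bigl(x_{i}^{\top}\beta\bigr),$$ where, in the proportional hazards model, the regression coefficients $\beta$ play the role of theta (the quantities of scientific interest) while the unspecified baseline hazard $\lambda_{0}(\cdot)$ is the nuisance component eta, eliminated through the partial likelihood.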
Abstract:
Radiolabeled somatostatin analogues have been successfully used for targeted radiotherapy and for imaging of somatostatin receptor (sst1-5)-positive tumors. Nevertheless, the tumor-to-nontarget ratio of these analogues still needs to be improved in order to enhance their diagnostic or therapeutic properties and to prevent nephrotoxicity. In order to understand the influence of lipophilicity and charge on the pharmacokinetic profile of [1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA)]-somatostatin-based radioligands such as [DOTA,1-Nal3]-octreotide (DOTA-NOC), different spacers (X) based on 8-amino-3,6-dioxaoctanoic acid (PEG2), 15-amino-4,7,10,13-tetraoxapentadecanoic acid (PEG4), N-acetyl glucosamine (GlcNAc), triglycine, beta-alanine, aspartic acid, and lysine were introduced between the chelator DOTA and the peptide NOC. All DOTA-X-NOC conjugates were synthesized by Fmoc solid-phase synthesis. The partition coefficient (log D) at pH = 7.4 indicated that higher hydrophilicity than [111In-DOTA]-NOC was achieved with the introduction of the mentioned spacers, except with triglycine and beta-alanine. The high affinity of [InIII-DOTA]-NOC for human sst2 (hsst2) was preserved with the structural modifications, while an overall drop in hsst3 affinity was observed, except in the case of [InIII-DOTA]-beta-Ala-NOC. The new conjugates preserved the good affinity for hsst5, except for [InIII-DOTA]-Asn(GlcNAc)-NOC, which showed decreased affinity. A significant 1.2-fold improvement in the specific internalization rate in AR4-2J rat pancreatic tumor cells (sst2 receptor expression) at 4 h was achieved with the introduction of Asp as a spacer in the parent compound. In sst3-expressing HEK cells, the specific internalization rate at 4 h for [111In-DOTA]-NOC (13.1% +/- 0.3%) was maintained with [111In-DOTA]-beta-Ala-NOC (14.0% +/- 1.8%), but the remaining derivatives showed <2% specific internalization. Biodistribution studies were performed with Lewis rats bearing the AR4-2J rat pancreatic tumor. In comparison to [111In-DOTA]-NOC (2.96% +/- 0.48% IA/g), the specific uptake in the tumor at 4 h p.i. was significantly improved for the 111In-labeled sugar analogue (4.17% +/- 0.46% IA/g), which among all the new derivatives presented the best tumor-to-kidney ratio (1.9).
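For reference, the distribution coefficient quoted above is the pH-dependent analogue of log P, typically determined between 1-octanol and buffer at pH 7.4: $$\log D_{7.4}=\log\frac{\sum_{\text{species}}[X]_{\text{octanol}}}{\sum_{\text{species}}[X]_{\text{buffer, pH 7.4}}},$$ so that more hydrophilic conjugates give lower (more negative) log D values; this is the standard definition, not a method detail specific to this study.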