Abstract:
Arbuscular mycorrhizal (AM) fungi (Order Glomales, Class Zygomycetes) are a diverse group of soil fungi that form mutualistic associations with the roots of most species of higher plants. Despite intensive study over the past 25 years, the phylogenetic relationships among AM fungi, and thus many details of the evolution of the symbiosis, remain unclear. Cladistic analysis was performed on fatty acid methyl ester (FAME) profiles of 15 species in Gigaspora and Scutellospora (family Gigasporaceae) using a restricted maximum likelihood approach for continuous character data. Results were compared to a parsimony analysis of spore morphological characters of the same species. Only one tree was generated from each character set. Morphological and developmental data suggest that species with the simplest spore types are ancestral, whereas those with complicated inner wall structures are derived. Spores of species with a complex wall structure pass through developmental stages identical to the mature stages of simpler spores, suggesting a pattern of classical Haeckelian recapitulation in the evolution of spore characters. Analysis of FAME profiles supported this hypothesis when Glomus leptotichum was used as the outgroup. However, when Glomus etunicatum was chosen as the outgroup, the polarity of the entire tree was reversed. Our results suggest that FAME profiles contain useful information and provide independent criteria for generating phylogenetic hypotheses in AM fungi. The maximum likelihood approach to analyzing FAME profiles may also prove useful for many other groups of organisms in which profiles are empirically shown to be stable and heritable.
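As an illustration only, a minimal sketch of how continuous fatty-acid profiles can be turned into a tree-like grouping hypothesis; the taxon labels and profile values below are invented, and simple distance-based hierarchical clustering is used as a stand-in, not the restricted maximum likelihood cladistic method applied in the study.

```python
# Illustrative sketch only: hierarchical clustering of hypothetical FAME profiles.
# This is NOT the restricted maximum likelihood analysis used in the study; it merely
# shows how continuous fatty-acid profiles can seed a tree hypothesis.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical relative abundances of four fatty acids for five taxa (rows sum to 1).
taxa = ["taxon_A", "taxon_B", "taxon_C", "taxon_D", "outgroup"]
profiles = np.array([
    [0.55, 0.20, 0.15, 0.10],
    [0.50, 0.25, 0.15, 0.10],
    [0.30, 0.40, 0.20, 0.10],
    [0.28, 0.42, 0.18, 0.12],
    [0.10, 0.15, 0.45, 0.30],   # outgroup-like profile
])

tree = linkage(pdist(profiles, metric="euclidean"), method="average")  # UPGMA-style
result = dendrogram(tree, labels=taxa, no_plot=True)  # inspect or plot the hierarchy
print(result["ivl"])  # leaf order of the resulting grouping
```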
Abstract:
International market access for fresh commodities is regulated by internationally accepted phytosanitary guidelines, the objectives of which are to reduce the biosecurity risk of plant pest and disease movement. Papua New Guinea (PNG) has identified banana as a potential export crop and, to help meet international market access requirements, this thesis provides information for the development of a pest risk analysis (PRA) for PNG banana fruit. The PRA is a three-step process which first identifies the pests associated with a particular commodity or pathway, then assesses the risk associated with those pests, and finally identifies risk management options for those pests if required. As the first step of the PRA process, I collated a definitive list of the organisms associated with the banana plant in PNG using formal literature, structured interviews with local experts, grey literature and unpublished file material held in PNG field research stations. I identified 112 organisms (invertebrates, vertebrates, pathogens and weeds) associated with banana in PNG, but only 14 of these were reported as commonly requiring management. For these 14 I present detailed information summaries on their known biology and pest impact. A major finding of the review was that, of the 14 identified key pests, research information exists for 13. The single exception for which information was found to be lacking was Bactrocera musae (Tryon), the banana fly. The lack of information on this widely reported 'major pest of PNG bananas' would hinder the development of a PNG banana fruit PRA. For this reason the remainder of the thesis focused on this organism, particularly with respect to generating the information required by the PRA process. Utilising an existing, but previously unanalysed, fruit fly trapping database for PNG, I carried out a Geographic Information System analysis of the distribution and abundance of banana fly in four major regions of PNG. This information is required for a PRA to determine whether banana fruit grown in different parts of the country are at different risk from the fly. Results showed that the fly was widespread in all cropping regions and that temperature and rainfall were not significantly correlated with banana fly abundance. Abundance of the fly was significantly, albeit weakly, correlated with host availability. The same analysis was done with four other PNG pest fruit flies, and their responses to the environmental factors differed from banana fly and from each other. This implies that subsequent PRA analyses for other PNG fresh commodities will need to investigate the risk posed by each of these flies independently. To quantify the damage to banana fruit caused by banana fly in PNG, local surveys and one national survey of banana fruit infestation were carried out. Contrary to expectations, infestation was found to be very low, particularly in the widely grown commercial cultivar, Cavendish. Infestation of Cavendish fingers was only 0.41% in a structured national survey of over 2,700 banana fingers. Follow-up laboratory studies showed that fingers of Cavendish, and of another commercial variety, Lady-finger, are very poor hosts for B. musae, with very low host selection rates by female flies and very poor immature survival. An analysis of a recent (within the last decade) incursion of B. musae into the Gazelle Peninsula of East New Britain Province, PNG, provided the final set of B. musae data.
Surveys of the fly on the peninsula showed that establishment and spread of the fly in the novel environment were very rapid, and thus the fly should be regarded as being of high biosecurity concern, at least in tropical areas. Supporting the earlier impact studies, however, banana fly has not become a significant banana fruit problem on the Gazelle, despite bananas being the primary starch staple of the region. The results of the research chapters are combined in the final Discussion in the form of a B. musae-focused PRA for PNG banana fruit. Putting the thesis in a broader context, the Discussion also deals with the apparent discrepancy between the high local abundance of banana fly and the very low infestation rates. This discussion focuses on host utilisation patterns of specialist herbivores and suggests that local pest abundance, as determined by trapping or monitoring, need not be a good surrogate for crop damage, despite this linkage being implicit in a number of international phytosanitary protocols.
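A hedged sketch of the kind of summary statistics described above, computed on hypothetical numbers: the infested/examined counts are chosen only to reproduce the quoted 0.41% Cavendish figure, the site-level trap and host data are invented, and Spearman rank correlation is an assumed choice of method rather than the one documented in the thesis.

```python
# Hedged sketch with hypothetical data; the real trapping database is not reproduced here.
import numpy as np
from scipy.stats import spearmanr

# National survey: infested fingers out of fingers examined (illustrative values chosen
# to match the ~0.41% Cavendish infestation rate quoted in the abstract).
infested, examined = 11, 2700
infestation_rate = 100.0 * infested / examined
print(f"Cavendish infestation: {infestation_rate:.2f}% of {examined} fingers")

# Correlation of B. musae trap catches with host availability across sites
# (hypothetical site-level data, assumed rank correlation).
trap_catches      = np.array([12, 30, 7, 45, 22, 18, 60, 9])
host_availability = np.array([0.2, 0.5, 0.1, 0.7, 0.4, 0.3, 0.9, 0.2])
rho, p = spearmanr(trap_catches, host_availability)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```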
Abstract:
Background: Accumulated biological research outcomes show that biological functions depend not on individual genes but on complex gene networks. Microarray data are widely used to cluster genes according to their expression levels across experimental conditions. However, functionally related genes generally do not show coherent expression across all conditions, since any given cellular process is active only under a subset of conditions. Biclustering finds gene clusters that have similar expression levels across a subset of conditions. This paper proposes a seed-based algorithm that identifies coherent genes in an exhaustive but efficient manner. Methods: In order to find the biclusters in a gene expression dataset, we exhaustively select combinations of genes and conditions as seeds to create candidate bicluster tables. The tables have two columns: (a) a gene set, and (b) the conditions on which the gene set has expression levels dissimilar to the seed. First, the genes with fewer than the maximum number of dissimilar conditions are identified and a table of these genes is created. Second, the rows that have the same dissimilar conditions are grouped together. Third, the table is sorted in ascending order based on the number of dissimilar conditions. Finally, beginning with the first row of the table, a test is run repeatedly to determine whether the cardinality of the gene set in the row is greater than the minimum threshold number of genes in a bicluster. If so, a bicluster is output and the corresponding row is removed from the table. Repeating this process, all biclusters in the table are systematically identified until the table becomes empty. Conclusions: This paper presents a novel biclustering algorithm for the identification of additive biclusters. Since it exhaustively tests combinations of genes and conditions, the additive biclusters can be found more readily.
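A minimal sketch of the table-building procedure described above, under stated assumptions: a single seed gene stands in for the paper's gene-and-condition seed, additive coherence is judged by deviation from a median shift within a tolerance, and the row-removal/merging details of the published method are simplified. Names and thresholds are illustrative, not the paper's parameterisation.

```python
# Minimal sketch of the seed-and-table idea; assumptions noted in the lead-in above.
from collections import defaultdict
import numpy as np

def seed_biclusters(X, seed_gene, tol=0.5, max_dissim=2, min_genes=3):
    """X: genes x conditions expression matrix; seed_gene: row index used as seed."""
    n_genes, n_cond = X.shape
    rows = []
    for g in range(n_genes):
        # Conditions where gene g deviates from an additive shift of the seed gene.
        shift = np.median(X[g] - X[seed_gene])
        dissim = frozenset(c for c in range(n_cond)
                           if abs(X[g, c] - X[seed_gene, c] - shift) > tol)
        if len(dissim) <= max_dissim:                 # step 1: drop genes with too many
            rows.append((g, dissim))

    grouped = defaultdict(set)                        # step 2: group identical dissimilar sets
    for g, dissim in rows:
        grouped[dissim].add(g)

    table = sorted(grouped.items(), key=lambda kv: len(kv[0]))  # step 3: sort ascending

    biclusters = []
    for dissim, genes in table:                       # step 4: test each row in order
        if len(genes) >= min_genes:
            conds = sorted(set(range(n_cond)) - dissim)
            biclusters.append((sorted(genes), conds))
    return biclusters

# Hypothetical usage on a small random matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))
print(seed_biclusters(X, seed_gene=0))
```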
Abstract:
In this article, we aim to reduce the error rate of an online Tamil symbol recognition system by employing multiple experts to reevaluate certain decisions of the primary support vector machine classifier. Motivated by the relatively high frequency of base consonants in the script, a reevaluation technique is first proposed to correct ambiguities arising in the base consonants. Secondly, a dynamic time-warping method is proposed to automatically extract the discriminative regions for each set of confused characters. Class-specific features derived from these regions help reduce the degree of confusion. Thirdly, statistics of specific features are proposed for resolving confusions in vowel modifiers. The reevaluation approaches are tested on two databases: (a) the isolated Tamil symbols in the IWFHR test set, and (b) the symbols segmented from a set of 10,000 Tamil words. The recognition rate of the isolated test symbols of the IWFHR database improves by 1.9%. For the word database, the incorporation of the reevaluation step improves the symbol recognition rate by 3.5% (from 88.4% to 91.9%). This, in turn, boosts the word recognition rate by 11.9% (from 65.0% to 76.9%). The reduction in the word error rate has been achieved using a generic approach, without the incorporation of language models.
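A hedged sketch of the dynamic time-warping idea: aligning two pen-coordinate sequences and flagging the most costly aligned samples as candidate discriminative regions for a confused character pair. The stroke data, the point-wise distance, and the "worst aligned pairs" heuristic are assumptions for illustration, not the paper's exact feature set.

```python
# Minimal DTW sketch (hypothetical data): align two (x, y) stroke sequences and flag
# the sample pairs with the largest aligned distances as candidate discriminative regions.
import numpy as np

def dtw_alignment(a, b):
    """a, b: (n, 2) and (m, 2) arrays of pen coordinates. Returns cost matrix and path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
    return D[1:, 1:], path[::-1]

# Hypothetical usage: the most costly aligned pairs hint at where two confused symbols
# differ, i.e. the region from which class-specific features could be drawn.
a = np.random.rand(40, 2); b = np.random.rand(35, 2)
costs, path = dtw_alignment(a, b)
worst = sorted(path, key=lambda ij: np.linalg.norm(a[ij[0]] - b[ij[1]]), reverse=True)[:5]
print(worst)
```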
Abstract:
In the field of embedded systems design, coprocessors play an important role as components that increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through instruction set extension of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. FPGA implementations of recent hash function designs from the SHA-3 competition are discussed and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
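For context, a hedged software sketch of the core ECC operation that coprocessors and instruction-set extensions typically accelerate: textbook double-and-add scalar multiplication over a toy prime-field Weierstrass curve. The curve parameters and point are toy values chosen for illustration; this is not the thesis's hardware architecture.

```python
# Hedged sketch: textbook double-and-add scalar multiplication on a toy curve
# y^2 = x^3 + 2x + 3 over GF(97). Not the thesis's hardware design.
P_MOD, A, B = 97, 2, 3

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                    # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mul(k, P):
    R = None                           # running result, starts at the identity
    while k:
        if k & 1:
            R = point_add(R, P)        # add when the current bit is set
        P = point_add(P, P)            # double for the next bit
        k >>= 1
    return R

# Example: (3, 6) lies on the toy curve since 6^2 = 36 = 27 + 6 + 3 (mod 97).
print(scalar_mul(5, (3, 6)))
```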
Abstract:
Exchange reactions between molecular complexes and excess acid or base are well known and have been extensively surveyed in the literature (1). Since the exchange mechanism will, in some way, involve the breaking of the labile donor-acceptor bond, it follows that a discussion of the factors relating to bonding in molecular complexes is relevant. In general, a strong Lewis base and a strong Lewis acid form a stable adduct provided that certain stereochemical requirements are met. A strong Lewis base has the following characteristics (1),(2): (i) high electron density at the donor site; (ii) a non-bonded electron pair with a low ionization potential; (iii) electron-donating substituents at the donor atom site; (iv) facile approach of the donor site of the Lewis base to the acceptor site, as dictated by the steric hindrance of the substituents. Examples of typical Lewis bases are ethers, nitriles, ketones, alcohols, amines and phosphines. For a strong Lewis acid, the following properties are important: (i) low electron density at the acceptor site; (ii) electron-withdrawing substituents; (iii) substituents which do not interfere with the close approach of the Lewis base; (iv) availability of a vacant orbital capable of accepting the lone electron pair of the donor atom. Examples of Lewis acids are the group III and IV halides such as MX3 (M = B, Al, Ga, In) and MX4 (M = Si, Ge, Sn, Pb). The relative bond strengths of molecular complexes have been investigated by: (i) dipole moment measurements (3); (ii) shifts of the carbonyl peaks in the IR (4),(5),(6); (iii) NMR chemical shift data (4),(7),(8),(9); (iv) UV and visible spectrophotometric shifts (10),(11); (v) equilibrium constant data (12),(13); (vi) heats of dissociation and heats of reaction (15),(16),(17),(18),(19). Many experiments have been carried out on boron trihalides in order to determine their relative acid strengths. Using pyridine, nitrobenzene, acetonitrile and trimethylamine as reference Lewis bases, it was found that the acid strength varied in the order BBr3 > BCl3 > BF3. For the acetonitrile-boron trihalide and trimethylamine-boron trihalide complexes in nitrobenzene, an NMR study (7) showed that the shift to lower field was greatest for the BBr3 adduct and smallest for BF3, which is in agreement with the acid strengths. If the electronegativities of the substituents were the only important effect, and since electronegativity decreases in the order F > Cl > Br, one would expect the electron density at the boron nucleus to vary as BF3 < BCl3 < BBr3 and therefore the acid strength to vary as BF3 > BCl3 > BBr3. However, for the boron trihalides, the experimentally determined trend is in the opposite direction. Considerable back-bonding (20),(21) between the halogen and boron atoms has been proposed as the predominating factor, i.e. a pi back-bond between a lone electron pair on the halogen and the vacant orbital on the boron site. The degree of back-bonding varies inversely with the boron-halogen distance, and one would therefore expect the B-F bond to exhibit greater back-bonding character than the B-Cl or B-Br bonds. Since back-bonding transfers electron density from the substituent to the boron atom site, this process would be expected to weaken the Lewis acid strength. This explains the Lewis acid strength increasing in the order BF3 < BCl3 < BBr3. When the acetonitrile-boron trihalide complex is formed, the boron atom undergoes a change of hybridization from sp2 to sp3.
From a linear relationship between the heat of formation of ethyl acetate adducts and the shift in the carbonyl IR stretch, Drago et al. (22) have proposed that the angular distortion of the X-B-X bonds from sp2 (120°) towards sp3 (109.5°) hybridization is proportional to the amount of charge transferred, i.e. to the nature of the base, and they have rejected the earlier concept of reorganization energy in explaining the formation of the adduct bond (19).
Abstract:
Alverata: a typeface design for Europe. This typeface is a response to the extraordinarily diverse forms of the letters of the Latin alphabet in manuscripts and inscriptions of the Romanesque period (c. 1000–1200). While the Romanesque did provide inspiration for architectural lettering in the nineteenth century, these letterforms have not until now been systematically considered and redrawn as a working typeface. The defining characteristic of the Romanesque letterform is variety: within an individual inscription or written text, letters such as A, C, E and G might appear with different forms at each appearance. Some of these forms relate to earlier Roman inscriptional forms and are therefore familiar to us, but others are highly geometric and resemble insular and uncial forms. The research underlying the typeface involved the collection of a large number of references for lettering of this period, from library research and direct on-site investigation. This investigation traced the wide dispersal of the Romanesque lettering tradition across the whole of Europe. The variety of letter widths and weights encountered, as well as variant shapes for individual letters, offered both direct models and stylistic inspiration for the characters and for the width and weight variants of the typeface. The ability of the OpenType format to handle multiple stylistic variants of any one character has been exploited to reflect the multiplicity of forms available to stonecutters and scribes of the period. To make a typeface that functions in a contemporary environment, a lower case has been added, and formal and informal variants are supported. The pan-European nature of the Romanesque design tradition has inspired a pan-European approach to the character set of the typeface, allowing for text composition in all European languages, and the typeface has been extended into Greek and Cyrillic, so that the broadest representation of European languages can be achieved.
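As a hedged aside on the mechanism mentioned above: OpenType fonts usually expose alternate letterforms through substitution features such as stylistic sets (ss01-ss20) or 'salt'. The sketch below simply lists whatever substitution features a font file declares; the file name is hypothetical and the snippet is not tied to Alverata's actual feature organisation.

```python
# Hedged sketch: list the OpenType substitution (GSUB) feature tags of a font file,
# the mechanism through which stylistic variants are typically exposed.
# "Alverata-Regular.ttf" is a hypothetical file name.
from fontTools.ttLib import TTFont

font = TTFont("Alverata-Regular.ttf")
if "GSUB" in font:
    feature_tags = sorted({rec.FeatureTag
                           for rec in font["GSUB"].table.FeatureList.FeatureRecord})
    print("Substitution features:", feature_tags)   # e.g. ['calt', 'salt', 'ss01', ...]
```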
Abstract:
The representation of non-Roman-script languages in online environments has proved problematic. During the initial years of computer-mediated communication, the American Standard Code for Information Interchange (ASCII) character set only supported Roman-alphabet languages. The solution for speakers of languages written in non-Roman scripts was to employ unconventional writing systems in an effort to represent their native language in online discourse. The first aim of this chapter is to present the different ways in which internet users choose to transliterate or even transcribe their native languages online using Roman characters. With technological development, and consequently the availability of various writing scripts online, internet users now have the option to use either Roman characters or their native script. If the latter is chosen, internet users still seem to deviate from conventional ways of writing, in this case, however, with regard to spelling. The second aim, therefore, is to bring to light recent developments, by looking at the ways in which internet users manipulate orthography to achieve their communicative purposes.
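An illustrative sketch of the kind of improvised Romanization described above, using a simplified "Greeklish"-style mapping; the table below is one informal convention among many, assumed for illustration rather than taken from the chapter.

```python
# Illustrative sketch: a simplified Greek-to-Roman transliteration table of the kind
# users improvised when only ASCII was available. One informal convention, not a standard.
GREEK_TO_ROMAN = {
    "α": "a", "ά": "a", "β": "b", "γ": "g", "δ": "d", "ε": "e", "έ": "e",
    "ζ": "z", "η": "h", "ή": "h", "θ": "8", "ι": "i", "ί": "i", "κ": "k",
    "λ": "l", "μ": "m", "ν": "n", "ξ": "ks", "ο": "o", "ό": "o", "π": "p",
    "ρ": "r", "σ": "s", "ς": "s", "τ": "t", "υ": "y", "ύ": "y", "φ": "f",
    "χ": "x", "ψ": "ps", "ω": "w", "ώ": "w",
}

def romanize(text: str) -> str:
    # Characters without a mapping (punctuation, Latin letters) pass through unchanged.
    return "".join(GREEK_TO_ROMAN.get(ch, ch) for ch in text.lower())

print(romanize("καλημέρα"))   # -> "kalhmera" (one of several possible renderings)
```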
Abstract:
The morphology of terebelliform polychaetes was investigated for a phylogenetic study focused on Terebellidae. For this study, specimens belonging to 147 taxa, preferably type material or specimens from type localities or areas close to them, were examined under stereo, light and scanning electron microscopes. The taxa examined were 1 Pectinariidae, 2 Ampharetidae, 2 Alvinellidae, 8 Trichobranchidae, and 134 Terebellidae, which included 8 Polycirrinae, 15 Thelepodinae, and 111 Terebellinae. A comparison of the morphology, including prostomium, peristomium, anterior segments and lobes, branchiae, glandular venter, nephridial and genital papillae, notopodia and notochaetae, neuropodia and neurochaetae, and posterior end, was made of all the currently recognized families of terebelliform polychaetes, with special emphasis on Terebellidae. A discussion of the characters useful to distinguish between genera is given. This character set will be used in a subsequent phylogenetic study (Nogueira & Hutchings in prep.)
Abstract:
This study aimed to identify differentiated management zones by means of fertility indicators in Latossolos (Oxisols) cultivated with sugarcane, using the indicator kriging technique, with a view to improving the use of precision agriculture techniques. The study was carried out on a 90 ha field belonging to a larger 1,900 ha area located in Jaboticabal, São Paulo State, Brazil (21°15' S, 48°18' W). In the experimental area, a regular 50 m sampling grid with 420 points was established. At each point a soil sample was collected, in which the contents of organic matter and available P and K, and the base saturation value (V), were determined. The data obtained for these soil properties, as well as the combinations between them, were coded as indicator values of 0 or 1 according to whether each variable fell above or below the cut-off value chosen for it. The results allowed probability maps to be produced for each variable (individual and combined), making it possible to identify regions with different levels of soil fertility within the experimental area.
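A hedged sketch of the indicator-coding step described above; the sample values and cut-offs are hypothetical, and the kriging interpolation itself (which turns these 0/1 fields into probability maps) is not reproduced here.

```python
# Hedged sketch: indicator (0/1) coding of soil fertility variables against cut-off
# values. Data and cut-offs are hypothetical; the indicator kriging step is omitted.
import numpy as np

# Hypothetical soil samples: organic matter, available P, base saturation V (%).
samples = np.array([
    [22.0, 18.0, 52.0],
    [31.0, 45.0, 68.0],
    [18.0, 12.0, 44.0],
    [27.0, 30.0, 61.0],
])
cutoffs = np.array([25.0, 20.0, 50.0])   # illustrative cut-off per variable

# 1 where the sample meets or exceeds the cut-off, 0 otherwise (per variable).
indicators = (samples >= cutoffs).astype(int)

# Combined indicator: 1 only where all variables meet their cut-offs simultaneously.
combined = indicators.all(axis=1).astype(int)
print(indicators)
print(combined)   # these 0/1 fields are what indicator kriging maps as probabilities
```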
Abstract:
Bamboos often negatively affect tree recruitment, survival, and growth, leading to arrested tree regeneration in forested habitats. Studies so far have focused on the effects of bamboos on the performance of seedlings and saplings, but the influence of bamboos on forest dynamics may start very early in the forest regeneration process by altering seed rain patterns. We tested the prediction that the density and composition of the seed rain are altered, and that seed limitation is higher, in stands of Guadua tagoara (B, or bamboo, stands), a large-sized woody bamboo native to the Brazilian Atlantic Forest, compared to forest patches without bamboos (NB, or non-bamboo, stands). Forty 1 m² seed traps were set in B and NB stands, and the seed rain was monitored monthly for 1 year. The seed rain was not greatly altered by the presence of bamboos: rarefied seed species richness was higher for B stands, patterns of dominance and density of seeds were similar between stands, and differences in overall composition were slight. Seed limitation, however, was greater in B stands, likely as a result of reduced tree density. Despite such reduced density, the presence of trees growing amidst and over the bamboos seems to play a key role in keeping seeds falling in B stands, because they serve as food sources for frugivores or simply as perches for them. The loss of such trees may lead to enhanced seed limitation, contributing ultimately to the self-perpetuating bamboo disturbance cycle.
Abstract:
This research was carried out with the aim of identifying and analysing the relationships between the languages of Mathematics and of Informatics in the classroom context, starting from the introduction of computer technologies into the learning of the quadratic function. In this sense, the concepts involving the algebraic and graphical forms of this function were observed by the students while exploring dynamic aspects of the GeoGebra interface. The theoretical framework of the research was supported by Pierre Lévy's ideas on the technologies of intelligence in the dissemination of information and knowledge, as well as by Ludwig Wittgenstein's philosophical contributions on language games. The research methodology is qualitative, defined on the basis of specific criteria concerning the object of study and the subjects investigated. The data were obtained through specific questions applied at two moments: before and after a short course on GeoGebra. The analysis of the questions revealed that the visual aspects and the movements involved in using the computer establish relationships between the algebraic and graphical forms of the quadratic function. Thus, the students were able to perceive that the numerical coefficients modify the parabola, and this gives meaning to the concepts studied. The use of GeoGebra enables other forms of learning, evidenced between the language game of Mathematics and the language game of Informatics within Mathematics Education.
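For reference, the standard vertex form of the quadratic function makes explicit how the coefficients modify the parabola (a standard identity, not material taken from the thesis): $$f(x)=ax^{2}+bx+c=a\left(x+\frac{b}{2a}\right)^{2}+c-\frac{b^{2}}{4a},\qquad x_{v}=-\frac{b}{2a},\quad y_{v}=c-\frac{b^{2}}{4a},$$ so $a$ controls the opening direction and width of the parabola, while $b$ and $c$ shift its vertex; this is the behaviour the students observed by varying the coefficients in GeoGebra.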
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated from reference population disease rates, in each area an SMR is derived as the maximum likelihood estimate under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without, however, leaving the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the false discovery rate (FDR), a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data can be calculated for any set of b_i corresponding to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i themselves. This estimated FDR can be used to provide a simple decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, and under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all b_i below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding FDR-based decision rules. To investigate the degree of over-smoothing of the relative risk estimates we compare box plots of these estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the FDR-based decision rules is generally low, but specificity is high. In such scenarios the use of a selection rule based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05); this results in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be recommended, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
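For reference, the decision rule described above can be written compactly (notation as in the abstract: $b_i$ is the posterior probability of the null hypothesis of absence of risk in area $i$, and $R_{t}$ is the set of areas with $b_i$ below a threshold $t$): $$\widehat{\mathrm{FDR}}(R_{t})=\frac{1}{|R_{t}|}\sum_{i\in R_{t}}b_{i},\qquad R^{*}=\text{the largest }R_{t}\ \text{such that}\ \widehat{\mathrm{FDR}}(R_{t})\le\alpha,$$ with, for example, $\alpha=0.05$ or $0.10$ as suggested by the simulation results.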
Abstract:
The goal of this paper is to contribute to the understanding of complex polynomials and Blaschke products, two very important function classes in mathematics. For a polynomial $f$ of degree $n$, we study when it is possible to write $f$ as a composition $f=g\circ h$, where $g$ and $h$ are polynomials, each of degree less than $n$. A polynomial is defined to be \emph{decomposable} if such an $h$ and $g$ exist, and a polynomial is said to be \emph{indecomposable} if no such $h$ and $g$ exist. We apply the results of Rickards in \cite{key-2}. We show that $$C_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,(z-z_{1})(z-z_{2})...(z-z_{n})\,\mbox{is decomposable}\}$$ has measure $0$ when considered as a subset of $\mathbb{R}^{2n}.$ Using this we prove the stronger result that $$D_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,\mbox{There exists\,}a\in\mathbb{C}\,\,\mbox{with}\,\,(z-z_{1})(z-z_{2})...(z-z_{n})(z-a)\,\mbox{decomposable}\}$$ also has measure zero when considered as a subset of $\mathbb{R}^{2n}.$ We show that for any polynomial $p$, there exists an $a\in\mathbb{C}$ such that $p(z)(z-a)$ is indecomposable, and we also examine the case of $D_{5}$ in detail. The main work of this paper studies finite Blaschke products, analytic functions on $\overline{\mathbb{D}}$ that map $\partial\mathbb{D}$ to $\partial\mathbb{D}.$ In analogy with polynomials, we discuss when a degree $n$ Blaschke product, $B,$ can be written as a composition $C\circ D$, where $C$ and $D$ are finite Blaschke products, each of degree less than $n.$ Decomposable and indecomposable are defined analogously. Our main results are divided into two sections. First, we equate a condition on the zeros of the Blaschke product with the existence of a decomposition where the right-hand factor, $D,$ has degree $2.$ We also equate decomposability of a Blaschke product, $B,$ with the existence of a Poncelet curve, whose foci are a subset of the zeros of $B,$ such that the Poncelet curve satisfies certain tangency conditions. This result is hard to apply in general, but has a very nice geometric interpretation when we desire a composition where the right-hand factor has degree 2 or 3. Our second section of finite Blaschke product results builds on the work of Cowen in \cite{key-3}. For a finite Blaschke product $B,$ Cowen defines the so-called monodromy group, $G_{B},$ of the finite Blaschke product. He then equates the decomposability of a finite Blaschke product, $B,$ with the existence of a nontrivial partition, $\mathcal{P},$ of the branches of $B^{-1}(z),$ such that $G_{B}$ respects $\mathcal{P}$. We present an in-depth analysis of how to calculate $G_{B}$, extending Cowen's description. These methods allow us to equate the existence of a decomposition where the left-hand factor has degree 2 with a simple condition on the critical points of the Blaschke product. In addition, we are able to put a condition on the structure of $G_{B}$ for any decomposable Blaschke product satisfying certain normalization conditions. The final section of this paper discusses how one can put the results of the paper into practice to determine whether a particular Blaschke product is decomposable. We compare three major algorithms.
The first is a brute force technique where one searches through the zero set of $B$ for subsets which could be the zero set of $D$, exhaustively searching for a successful decomposition $B(z)=C(D(z)).$ The second algorithm involves simply examining the cardinality of the image, under $B,$ of the set of critical points of $B.$ For a degree $n$ Blaschke product, $B,$ if this cardinality is greater than $\frac{n}{2}$, the Blaschke product is indecomposable. The final algorithm attempts to apply the geometric interpretation of decomposability given by our theorem concerning the existence of a particular Poncelet curve. The final two algorithms can be implemented easily with the use of an HTML
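For reference, a degree-$n$ finite Blaschke product has the standard form $$B(z)=e^{i\theta}\prod_{k=1}^{n}\frac{z-a_{k}}{1-\overline{a_{k}}\,z},\qquad|a_{k}|<1,$$ and the second algorithm above amounts to checking whether the cardinality of $B(\{z:B'(z)=0\})$ exceeds $\frac{n}{2}$; if it does, $B$ is indecomposable.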
Abstract:
Despite the wide range of applications in which multiphase fluid contact lines exist, there is still no consensus on an accurate and general simulation methodology. Most prior numerical work has imposed one of the many dynamic contact-angle theories at solid walls. Such approaches are inherently limited by the accuracy of the theory. In fact, when inertial effects are important, the contact angle may be history dependent and, thus, no single mathematical function is appropriate. Given these limitations, the present work has two primary goals: 1) create a numerical framework that allows the contact angle to evolve naturally with appropriate contact-line physics, and 2) develop equations and numerical methods such that contact-line simulations may be performed on coarse computational meshes.
Fluid flows affected by contact lines are dominated by capillary stresses and require accurate curvature calculations. The level set method was chosen to track the fluid interfaces because it allows interface curvature to be calculated easily and accurately. Unfortunately, level set reinitialization suffers from an ill-posed mathematical problem at contact lines: a "blind spot" exists. Standard techniques to handle this deficiency are shown to introduce parasitic velocity currents that artificially deform freely floating (non-prescribed) contact angles. As an alternative, a new relaxation-equation reinitialization is proposed to remove these spurious velocity currents, and the concept is further explored with level-set extension velocities.
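For context, the classical reinitialization equation whose contact-line behaviour is at issue above is, in its standard form (the new relaxation equation proposed in this work is not reproduced here), $$\frac{\partial\phi}{\partial\tau}=\operatorname{sign}(\phi_{0})\left(1-\left|\nabla\phi\right|\right),$$ which drives $\phi$ toward a signed-distance function ($|\nabla\phi|=1$) while holding the zero level set, i.e. the interface, in place.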
To capture contact-line physics, two classical boundary conditions, the Navier-slip velocity boundary condition and a fixed contact angle, are implemented in direct numerical simulations (DNS). The DNS are found to converge only if the slip length is well resolved by the computational mesh. Unfortunately, since the slip length is often very small compared to the fluid structures, such simulations are not computationally feasible for large systems. To address the second goal, a new methodology is proposed which relies on the volumetric-filtered Navier-Stokes equations. Two unclosed terms, an average curvature and a viscous shear term, are proposed to represent the missing microscale physics on a coarse mesh.
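The two classical boundary conditions mentioned above can be written in their standard textbook forms (these are not the closures of the filtered equations proposed in this work): $$u_{t}\big|_{\mathrm{wall}}=\lambda\,\frac{\partial u_{t}}{\partial n}\bigg|_{\mathrm{wall}},\qquad\theta=\theta_{s},$$ where $u_{t}$ is the tangential fluid velocity at the wall, $\lambda$ the slip length, $n$ the wall-normal direction, and $\theta_{s}$ the prescribed (fixed) contact angle.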
All of these components are then combined into a single framework and tested for a water droplet impacting a partially wetting substrate. Very good agreement is found between the experimental measurements and the numerical simulation for the evolution of the contact diameter in time. Such a comparison would not be possible with prior methods, since the Reynolds number Re and capillary number Ca are large. Furthermore, the experimentally approximated slip length ratio is well outside the range currently achievable by DNS. This framework is a promising first step towards simulating complex physics in capillary-dominated flows at a reasonable computational expense.