Abstract:
Many large networks arise in problems dealing with the flow of power, communication signals, water, gas, transportable goods, and so on. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear and the objective function nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together the important constraints arising from the abstraction of a general network. The third part deals with solution procedures: it converts the network into a matrix-based system of equations, gives the characteristics of that matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved on a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

A number of common features pertain to networks. A network consists of a set of nodes and arcs, and at every node there may be an input (of power, water, messages, goods, etc.), an output, or neither. The network equations describe the flows among nodes through the arcs and couple the variables associated with the nodes. The variables pertaining to arcs are invariably constants; the required results are the flows through the arcs. In the basic problem, the input and output flows at the nodes and certain physical constraints on other nodal variables are given, and the flows through the network must be found (the variables at nodes are referred to as across variables).

The optimization problem consists in selecting the inputs at the nodes so as to optimize an objective function; the objective may be a cost function of the inputs to be minimized, a loss function, or an efficiency function. This mathematical model can be solved using the Lagrange multiplier technique, since the equality constraints dominate the inequalities. The technique divides each iteration of the solution procedure into two stages: stage one computes the problem variables and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving the nonlinear system of necessary conditions) also occurs in stage two. A second solution procedure, called the total residue approach, has been embedded into the first one; it modifies the equality constraints so that the iterations converge faster. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms that solve these optimization problems in a distributed manner. Two types of algorithms have been proposed, one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case: the regional distributed algorithm and the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The aim was to define algorithms that are fast and use minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
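For readers who want the flavour of the two-stage iteration described above, the following is a minimal Python sketch for a generic equality-constrained problem (minimize f(x) subject to g(x) = 0). The function names, the Newton step in stage one and the dual-ascent update in stage two are illustrative choices, not the paper's actual scheme (which reuses the same Jacobian in both stages in a more elaborate way).

```python
import numpy as np

# Sketch of a two-stage Lagrange-multiplier iteration for
#   minimize f(x)  subject to  g(x) = 0.
# Stage 1 updates the problem variables, stage 2 the multipliers lambda.
# This is an illustrative primal-Newton / dual-ascent scheme, not the
# paper's exact algorithm.

def two_stage_lagrange(f_grad, f_hess, g, g_jac, x0, lam0,
                       alpha=0.5, n_iter=100):
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(n_iter):
        J = g_jac(x)                                  # constraint Jacobian
        # Stage 1: Newton step on the necessary condition
        #   grad f(x) + J^T lam = 0   (multipliers held fixed).
        grad_L = f_grad(x) + J.T @ lam
        x = x - np.linalg.solve(f_hess(x), grad_L)
        # Stage 2: multiplier update driven by the constraint residuals.
        lam = lam + alpha * g(x)
    return x, lam

# Toy usage: minimize ||x||^2 subject to x1 + x2 = 1 (solution x = (0.5, 0.5)).
x, lam = two_stage_lagrange(
    f_grad=lambda x: 2.0 * x,
    f_hess=lambda x: 2.0 * np.eye(2),
    g=lambda x: np.array([x[0] + x[1] - 1.0]),
    g_jac=lambda x: np.array([[1.0, 1.0]]),
    x0=[0.0, 0.0], lam0=[0.0],
)
```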
Abstract:
Forecasting of daily flow in the master planning of municipal water supply and sewerage works.
Abstract:
The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model, and in the fourth study, the localization was based on the local neighbourhood. According to the principle of spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the formed sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours. Nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower than those of the general model, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could also be combined with kriging or with non-parametric methods such as most similar neighbours (MSN).
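As a concrete illustration of the kriging-based localization described above (a global trend corrected with the residuals of the k nearest neighbours), the following Python sketch solves an ordinary-kriging system for the residual at a prediction point. The exponential variogram, its parameters and the helper names are illustrative assumptions; in the study the variogram would be fitted to the data and the neighbourhood size kept above roughly 30.

```python
import numpy as np

def exponential_semivariogram(h, nugget=0.0, sill=1.0, rng=100.0):
    """Exponential semivariogram model gamma(h) (parameters are illustrative)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

def kriged_residual(x0, coords, residuals, k=30, **vario_params):
    """Ordinary-kriging estimate of the global-model residual at point x0,
    using the k nearest observations (Euclidean distance)."""
    x0 = np.asarray(x0, float)
    d = np.linalg.norm(coords - x0, axis=1)
    idx = np.argsort(d)[:k]
    pts, res = coords[idx], residuals[idx]
    # Pairwise semivariances between the selected neighbours.
    H = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    G = exponential_semivariogram(H, **vario_params)
    # Ordinary-kriging system with a Lagrange multiplier for unbiasedness.
    A = np.ones((k + 1, k + 1))
    A[:k, :k] = G
    A[-1, -1] = 0.0
    b = np.ones(k + 1)
    b[:k] = exponential_semivariogram(d[idx], **vario_params)
    w = np.linalg.solve(A, b)[:k]
    return float(w @ res)

# Localized prediction = global trend + kriged residual of the neighbourhood:
#   y_local(x0) = global_model(x0) + kriged_residual(x0, coords, residuals, k=30)
```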
Abstract:
Stability results are given for a class of feedback systems arising from the regulation of time-varying discrete-time systems using optimal infinite-horizon and moving-horizon feedback laws. The class is characterized by joint constraints on the state and the control, a general nonlinear cost function and nonlinear equations of motion possessing two special properties. It is shown that weak conditions on the cost function and the constraints are sufficient to guarantee uniform asymptotic stability of both the optimal infinite-horizon and moving-horizon feedback systems. The infinite-horizon cost associated with the moving-horizon feedback law approaches the optimal infinite-horizon cost as the moving horizon is extended.
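To make the moving-horizon feedback law concrete, here is a minimal receding-horizon loop in Python. The dynamics, stage cost, horizon length and the simple input bound (standing in for the paper's joint state and control constraints) are illustrative assumptions, not the system class analysed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, u):                       # example nonlinear equations of motion
    return np.array([x[1], np.sin(x[0]) + u[0]])

def stage_cost(x, u):              # example general nonlinear cost
    return x @ x + 0.1 * u @ u

def moving_horizon_control(x0, N=10, u_max=1.0):
    """Solve an N-step open-loop problem from x0 and return the first input."""
    def total_cost(u_flat):
        u_seq = u_flat.reshape(N, 1)
        x, J = np.asarray(x0, float), 0.0
        for u in u_seq:
            J += stage_cost(x, u)
            x = f(x, u)
        return J + x @ x           # simple terminal penalty
    res = minimize(total_cost, np.zeros(N), method="L-BFGS-B",
                   bounds=[(-u_max, u_max)] * N)
    return res.x[:1]               # apply only the first control (receding horizon)

# Closed loop: the finite-horizon problem is re-solved at every time step.
x = np.array([1.0, 0.0])
for t in range(20):
    u = moving_horizon_control(x)
    x = f(x, u)
```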
Abstract:
All protein-encoding genes in eukaryotes are transcribed into messenger RNA (mRNA) by RNA Polymerase II (RNAP II), whose activity therefore needs to be tightly controlled. An important and only partially understood level of regulation is the multiple phosphorylation of the RNAP II large subunit C-terminal domain (CTD). Sequential phosphorylations regulate transcription initiation and elongation, and recruit factors involved in co-transcriptional processing of mRNA. Based largely on studies in yeast models and in vitro, the kinase activity responsible for the phosphorylation of the serine-5 (Ser5) residues of the RNAP II CTD has been attributed to the Mat1/Cdk7/CycH trimer as part of Transcription Factor IIH (TFIIH). However, due to the lack of good mammalian genetic models, studies of the roles of RNAP II Ser5 phosphorylation and of the TFIIH kinase in transcription have yielded ambiguous results, and the in vivo Ser5 kinase has remained elusive. The primary objective of this study was to elucidate the role of mammalian TFIIH, and specifically of the Mat1 subunit, in CTD phosphorylation and general RNAP II-mediated transcription. The approach utilized the Cre-LoxP system to conditionally delete murine Mat1 in cardiomyocytes and hepatocytes in vivo and in cell culture models. The results identify the TFIIH kinase as the major mammalian Ser5 kinase and demonstrate, using nascent mRNA labeling, its requirement for general transcription. A role for Mat1 in regulating general mRNA turnover was also identified, providing a possible rationale for earlier negative findings. A secondary objective was to identify potential gene- and tissue-specific roles of Mat1 and the TFIIH kinase through tissue-specific Mat1 deletion. Mat1 was found to be required for the transcriptional function of PGC-1 in cardiomyocytes. Transcriptional activation of lipogenic SREBP1 target genes following Mat1 deletion in hepatocytes revealed a repressive role for Mat1, apparently mediated via the co-repressor DMAP1 and the DNA methyltransferase Dnmt1. Finally, Mat1 and Cdk7 were also identified as negative regulators of adipocyte differentiation through the inhibitory phosphorylation of Peroxisome proliferator-activated receptor (PPAR) γ. Together, these results demonstrate gene- and tissue-specific roles for the Mat1 subunit of TFIIH and open up new therapeutic possibilities in the treatment of diseases such as type II diabetes, hepatosteatosis and obesity.
Abstract:
The concept of a short-range strong spin-two (f) field, mediated by massive f-mesons and interacting directly with hadrons, was introduced along with the infinite-range (g) field in the early seventies. In the present review of this growing area (often referred to as strong gravity) we give a general relativistic treatment in terms of Einstein-type (non-abelian gauge) field equations with a coupling constant Gf ≈ 10^38 GN (GN being the Newtonian constant) and a cosmological term λf fμν (fμν is the strong-gravity metric and λf ~ 10^28 cm^-2 is related to the f-meson mass). The solutions of the field equations linearized over a de Sitter (uniformly curved) background are capable of having connections with the internal symmetries of hadrons and of yielding mass formulae of SU(3) or SU(6) type. The hadrons emerge as de Sitter "microuniverses", intensely curved within (radius of curvature ~10^-14 cm). The study of spinor fields in the context of strong gravity has led to Heisenberg's non-linear spinor equation with a fundamental length of ~2 × 10^-14 cm. Furthermore, one finds a repulsive spin-spin interaction when two identical spin-1/2 particles are in a parallel configuration, and a connection between the weak interaction and strong gravity. Various other consequences of strong gravity embrace black hole (solitonic) solutions representing hadronic bags with possible quark confinement, Regge-like relations between spins and masses, connections with monopoles and dyons, quantum geons and friedmons, hadronic temperature, prevention of gravitational singularities, a physical basis for Dirac's two-metric and large numbers hypotheses, and a projected unification with the other basic interactions through extended supergravity.
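For orientation, the Einstein-type field equations referred to above are commonly written in the strong-gravity literature in a form along the following lines; sign and source-term conventions vary between papers, so this is a schematic reconstruction rather than a quotation from the review:

```latex
R_{\mu\nu}(f) - \tfrac{1}{2}\, f_{\mu\nu} R(f) + \lambda_f\, f_{\mu\nu}
  = -8\pi G_f\, T_{\mu\nu},
\qquad G_f \approx 10^{38}\, G_N,
\qquad \lambda_f \sim 10^{28}\ \mathrm{cm}^{-2}.
```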
Abstract:
Side chain bromination of aromatic amidomethylated compounds yields aldehydes.
Abstract:
In rapid parallel magnetic resonance imaging, the problem of image reconstruction is challenging. Here, a novel image reconstruction technique for data acquired along any general trajectory in a neural network framework, called "Composite Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. The transformation needed to reconstruct the alias-free image from the aliased coil images is learnt using acquisitions consisting of densely sampled low frequencies. Neural networks are used as machine learning tools to learn this transformation, so that the desired alias-free image can be obtained for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain, does not require explicit coil sensitivity estimation, and is independent of the sampling trajectory used, so it can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory; higher acceleration factors can be obtained when radial trajectories are used. Comparisons against existing techniques show that CRAUNN performs on par with the state-of-the-art methods. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. (C) 2010 Elsevier Inc. All rights reserved.
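The following is a much-simplified, Cartesian, single-network sketch of the calibration idea described above: learn the aliased-to-unaliased mapping from densely sampled low-frequency data, then apply it to the accelerated scan. All names, shapes and the pixel-wise formulation are illustrative assumptions, not the CRAUNN implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def alias(img, mask):
    """Simulate aliasing by undersampling the image's k-space with `mask`."""
    return np.fft.ifft2(np.fft.fft2(img) * mask)

def features(aliased_coils, shape):
    """Per-pixel features: aliased coil values (real/imag) plus pixel coordinates."""
    ny, nx = shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    cols = ([c.real.ravel() for c in aliased_coils]
            + [c.imag.ravel() for c in aliased_coils]
            + [yy.ravel() / ny, xx.ravel() / nx])
    return np.stack(cols, axis=1)

def train_unaliasing_net(lowres_coils, lowres_reference, mask):
    """Learn the unaliasing transform from low-frequency calibration data
    (lowres_reference is taken here as a real-valued reference image)."""
    aliased = [alias(c, mask) for c in lowres_coils]
    X = features(aliased, lowres_reference.shape)
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    net.fit(X, lowres_reference.ravel())
    return net

def reconstruct(net, aliased_coils, shape):
    """Apply the learnt transform to the actual undersampled acquisition."""
    return net.predict(features(aliased_coils, shape)).reshape(shape)
```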
Abstract:
Positron emission tomography (PET) is a molecular imaging technique that utilises radiopharmaceuticals (radiotracers) labelled with a positron-emitting radionuclide, such as fluorine-18 (18F). Development of a new radiotracer requires an appropriate radiosynthesis method; the most common with 18F is nucleophilic substitution with [18F]fluoride ion. The success of the labelling reaction depends on various factors, such as the reactivity of [18F]fluoride and the structure of the target compound, in addition to the chosen solvent. The overall radiosynthesis procedure must be optimised in terms of radiochemical yield and quality of the final product. Therefore, both quantitative and qualitative radioanalytical methods are essential in developing radiosynthesis methods. Furthermore, the biological properties of a tracer candidate need to be evaluated in various pre-clinical studies in animal models. In this work, the feasibility of various nucleophilic 18F-fluorination strategies was studied and a labelling method for a novel radiotracer, N-3-[18F]fluoropropyl-2beta-carbomethoxy-3beta-(4-fluorophenyl)nortropane ([18F]beta-CFT-FP), was optimised. The effect of solvent was studied by labelling a series of model compounds, 4-(R1-methyl)benzyl R2-benzoates. 18F-Fluorination reactions were carried out both in polar aprotic and in protic solvents (tertiary alcohols). Assessment of the 18F-fluorinated products by mass spectrometry (MS), in addition to conventional radiochromatographic methods, was studied using the radiosynthesis of 4-[18F]fluoro-N-[2-[1-(2-methoxyphenyl)-1-piperazinyl]ethyl]-N-2-pyridinyl-benzamide (p-[18F]MPPF) as a model reaction. Labelling of [18F]beta-CFT-FP was studied using two 18F-fluoroalkylation reagents, [18F]fluoropropyl bromide and [18F]fluoropropyl tosylate, as well as by direct 18F-fluorination of a sulfonate ester precursor. Subsequently, the suitability of [18F]beta-CFT-FP for imaging the dopamine transporter (DAT) was evaluated by determining its biodistribution in rats. The results showed that protic solvents can be useful co-solvents in aliphatic 18F-fluorinations, especially in the labelling of sulfonate esters. Aromatic 18F-fluorination was not promoted in tert-alcohols. The sensitivity of the ion trap MS was sufficient for the qualitative analysis of the 18F-labelled products; p-[18F]MPPF was identified from the isolated product fraction at a mass-to-charge ratio (m/z) of 435 (i.e. the protonated molecule [M+H]+). [18F]beta-CFT-FP was produced most efficiently via [18F]fluoropropyl tosylate, giving sufficient radiochemical yield and specific radioactivity for PET studies. Ex vivo studies in rats showed fast kinetics as well as specific uptake of [18F]beta-CFT-FP in the DAT-rich brain regions. It was thus concluded that [18F]beta-CFT-FP has potential as a radiotracer for imaging DAT with PET.
Abstract:
A Geodesic Constant Method (GCM) is outlined which provides a common approach to ray tracing on quadric cylinders in general and yields, in closed form, all the surface ray-geometric parameters required in the UTD mutual coupling analysis of conformal antenna arrays. The approach permits the incorporation of a shaping parameter, allowing quadric cylindrical surfaces of the desired sharpness or flatness to be modelled with a common set of equations. The mutual admittance between slots on a general parabolic cylinder is obtained as an illustration of the applicability of the GCM.
Abstract:
A general model of a foam bed reactor has been developed which rigorously accounts for the extent of gas absorption with chemical reaction occurring in both the storage and foam sections. Its applicability extends to a wide spectrum of reaction velocities. The predominance of the bulk-liquid reaction in the storage section, or of absorption with reaction in the foam section, can be handled as special cases of the general analysis. The importance of foam for carrying out a particular gas-liquid reaction is characterised by a criterion expressed in terms of the fractional rate of reaction in the foam section. Trends in the concentrations of dissolved free A, solute B and gas-phase A with time of operation of the reactor are presented. The variation in the fractional rate of reaction in the foam section with time at different reaction velocities, and the effect of the liquid flow rate (across the storage section) on the transience, are also illustrated. Finally, the predictions of the general model have been validated using the available experimental data on the oxidation of sodium sulphide in a foam bed reactor. The agreement between the experimental data and the present theoretical results is fairly good, and the model is more insightful than previous models of this reactor.
Abstract:
By using the lower bound limit analysis in conjunction with finite elements and linear programming, the bearing capacity factors due to cohesion, surcharge and unit weight have been computed for a circular footing for different values of the soil friction angle phi. The recent axisymmetric formulation proposed by the authors for the phi = 0 condition, which is based on the concept that the magnitude of the hoop stress (sigma(theta)) remains closer to the least compressive normal stress (sigma(3)), is extended to a general c-phi soil. The computational results compare quite well with available numerical results from the literature. It is expected that the study will be useful for solving various axisymmetric geotechnical stability problems. Copyright (C) 2010 John Wiley & Sons, Ltd.
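For context, the three factors computed in the study enter the classical bearing-capacity expression, written schematically below. The symbols Nc, Nq and Ngamma are the customary names for the factors due to cohesion, surcharge and unit weight (they are not spelled out in the abstract); c is the cohesion, q the surcharge, gamma the unit weight and B the footing diameter.

```latex
q_u = c\,N_c + q\,N_q + \tfrac{1}{2}\,\gamma\,B\,N_\gamma
```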