974 results for Convexity in Graphs
Abstract:
This project was performed at Rochester Institute of Technology to gain a deeper understanding of the similarities and differences between AM and FM screening, with particular attention to mottle. By designing a test form that conformed to the specified measurements and printing it on Heidelberg's Sunday 2000 press, the project group evaluated both the questions that existed at the outset and those that arose during the project. Because the first press run produced some unexpected phenomena, a second press run was performed. Measurements were taken and graphs produced in Excel. The project group evaluated the results and was thereby able to establish facts and draw conclusions. The project was a valuable learning experience for the group.
Abstract:
Objective: We present a new evaluation of levodopa plasma concentrations and clinical effects during duodenal infusion of a levodopa/carbidopa gel (Duodopa) in 12 patients with advanced Parkinson's disease (PD), from a study reported previously (Nyholm et al., Clin Neuropharmacol 2003; 26(3): 156-163). One objective was to investigate in which stage of PD the greatest benefits of infusion are seen compared with the corresponding oral treatment (Sinemet CR). Another objective was to identify fluctuating responses to levodopa and correlate them with variables related to disease progression. Methods: We computed the mean absolute error (MAE) and mean squared error (MSE) of the clinical rating, scored from -3 (severe parkinsonism) to +3 (severe dyskinesia), as measures of the clinical state over the treatment periods of the study. The standard deviation (SD) of the rating was used as a measure of response fluctuations. Linear regression and visual inspection of graphs were used to estimate relationships between these measures and variables related to disease progression, such as years on levodopa (YLD) or the Unified PD Rating Scale part II (UPDRS II). Results: We found that the MAE for infusion had a strong linear correlation with YLD (r² = 0.80), while the corresponding relation for oral treatment looked more sigmoid, particularly for the more advanced patients (YLD > 18).
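A minimal sketch of these error measures, assuming hypothetical rating samples on the study's -3 to +3 scale (all values invented):

```python
import numpy as np

# Hypothetical clinical ratings sampled over a treatment period, on the
# -3 (severe parkinsonism) to +3 (severe dyskinesia) scale; the target
# state is 0 ("on" without dyskinesia).
ratings = np.array([-1, 0, 0, 1, -2, 0, 1, 0, -1, 2], dtype=float)
target = 0.0

mae = np.mean(np.abs(ratings - target))   # mean absolute error
mse = np.mean((ratings - target) ** 2)    # mean squared error
sd = np.std(ratings, ddof=1)              # fluctuation measure (sample SD)

print(f"MAE={mae:.2f}  MSE={mse:.2f}  SD={sd:.2f}")
```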
Abstract:
The open provenance architecture (OPA) approach to the challenge was distinct in several regards. In particular, it is based on an open, well-defined data model and architecture, allowing different components of the challenge workflow to independently record documentation, and allowing the workflow to be executed in any environment. Another noticeable feature is that we distinguish between the data recorded about what has occurred, i.e. process documentation, and the provenance of a data item, which is everything that caused the data item to be as it is and is obtained as the result of a query over process documentation. This distinction allows us to tailor the system to best address the requirements of recording and of querying documentation separately. Other notable features include the explicit recording of causal relationships between both events and data items, an interaction-based world model, the intensional definition of data items in queries rather than reliance on explicit naming mechanisms, and the styling of documentation to support non-functional application requirements such as reducing storage costs or ensuring privacy of data. In this paper we describe how each of these features aids us in answering the challenge provenance queries.
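As an illustration of the distinction between process documentation and provenance (not the OPA data model itself; all record structures here are hypothetical), provenance can be viewed as a reachability query over recorded causal edges:

```python
# Process documentation: causal edges recorded independently by each
# component, as (effect, cause) pairs over events and data items.
# All names below are invented for illustration.
process_doc = {
    ("plot.png", "render"),
    ("render", "table.csv"),
    ("table.csv", "aggregate"),
    ("aggregate", "raw_data"),
}

def provenance(item, doc):
    """Provenance of an item = everything reachable through its causes,
    obtained as a query over the recorded process documentation."""
    causes = {c for effect, c in doc if effect == item}
    result = set(causes)
    for c in causes:
        result |= provenance(c, doc)
    return result

print(sorted(provenance("plot.png", process_doc)))
# ['aggregate', 'raw_data', 'render', 'table.csv']
```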
Abstract:
The regime of environmental flows (EF) must be included as a term of environmental demand in the management of water resources. Even though there are numerous methods for the computation of EF, the criteria applied at different steps of the calculation process are quite subjective, whereas the results are fixed values that must be met by water planners. This study presents a user-friendly tool for assessing the probability that a given EF scenario complies with the natural regime in a semiarid area of southern Spain. 250 replications of a 25-yr period of different hydrological variables (rainfall, minimum and maximum flows, ...) were obtained at the study site by combining the Monte Carlo technique with local hydrological relationships. Several assumptions are made, such as the independence of annual rainfall from year to year and the variability of occurrence of the meteorological agents, with precipitation as the main source of uncertainty. Inputs to the tool are easily selected from a first menu and comprise measured rainfall data, EF values and the hydrological relationships for at least a 20-yr period. The outputs are the probabilities of compliance of the different components of the EF for the study period. From this, local optimization can be applied to establish EF components with a certain level of compliance over the study period. Different options for graphical output and analysis of results are included, in the form of graphs and tables in several formats. This methodology turned out to be a useful tool for implementing an uncertainty analysis within the scope of environmental flows in water management, and allowed the simulation of the impacts of several water-resource development scenarios at the study site.
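A rough sketch of the Monte Carlo compliance estimate described above, with a placeholder rainfall distribution and hydrological relationship standing in for the locally fitted ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the tool's inputs: an environmental-flow
# threshold and a local rainfall-to-minimum-flow relationship.
EF_MIN_FLOW = 2.0          # m3/s, example EF component
N_REPLICATIONS = 250       # replications of a 25-yr period
N_YEARS = 25

def min_flow_from_rainfall(annual_rainfall):
    # Placeholder hydrological relationship (would be fitted locally).
    return 0.005 * annual_rainfall + rng.normal(0.0, 0.3)

compliant_years = 0
for _ in range(N_REPLICATIONS):
    # Annual rainfall assumed independent from year to year (as in the study).
    rainfall = rng.gamma(shape=4.0, scale=120.0, size=N_YEARS)
    flows = np.array([min_flow_from_rainfall(r) for r in rainfall])
    compliant_years += np.sum(flows >= EF_MIN_FLOW)

p_compliance = compliant_years / (N_REPLICATIONS * N_YEARS)
print(f"Probability of compliance: {p_compliance:.2f}")
```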
Abstract:
As a highly urbanized and flood-prone region, Flanders has experienced multiple floods causing significant damage in the past. In response to the floods of 1998 and 2002, the Flemish Environment Agency, responsible for managing 1 400 km of unnavigable rivers, started setting up a real-time flood forecasting system in 2003. Currently the system covers almost 2 000 km of unnavigable rivers, for which flood forecasts are accessible online (www.waterinfo.be). The forecasting system comprises more than 1 000 hydrologic and 50 hydrodynamic models, which are supplied with radar rainfall, rainfall forecasts and on-site observations. Forecasts for the next 2 days are generated hourly, while 10-day forecasts are generated twice a day. Additionally, twice-daily simulations based on percentile rainfall forecasts (from EPS predictions) provide uncertainty bands for the latter. Each new flood forecast thus uses the most recent rainfall predictions and observations available at any time, while longer-term uncertainty is taken into account. The flood forecasting system produces high-resolution dynamic flood maps and graphs at about 200 river gauges and more than 3 000 forecast points. A customized emergency response system generates phone calls and text messages to a team of hydrologists, initiating a proactive response to prevent upcoming flood damage. The flood forecasting system of the Flemish Environment Agency is constantly evolving and has proven to be an indispensable tool in flood crisis management. This was clearly the case during the November 2010 floods, when the agency issued a press release 2 days in advance, allowing water managers, emergency services and civilians to take measures.
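One way such percentile-based uncertainty bands can be produced, sketched with a placeholder rainfall-runoff model and an invented EPS ensemble (not the agency's actual system):

```python
import numpy as np

rng = np.random.default_rng(1)

def hydrologic_model(rainfall_mm):
    # Placeholder rainfall-runoff relationship (hourly discharge response).
    return 0.8 * rainfall_mm + 1.5

# Hypothetical 10-day EPS rainfall ensemble: 50 members x 240 hours.
ensemble = rng.gamma(shape=2.0, scale=1.5, size=(50, 240))

# Run the model on selected rainfall percentiles to band the forecast.
bands = {}
for p in (10, 50, 90):
    rain_p = np.percentile(ensemble, p, axis=0)
    bands[p] = hydrologic_model(rain_p)

print({p: q[:3].round(2) for p, q in bands.items()})
```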
Abstract:
In this note, in an independent private values auction framework, I discuss the relationship between the set of types and the distribution of types. I show that any set of types, finite dimensional or not, can be extended to a larger set of types preserving incentive compatibility constraints, expected revenue and bidders' expected utilities. Thus, for example, we may convexify a set of types, making our model amenable to the large body of theory in economics and mathematics that relies on convexity assumptions. An interesting application of this extension procedure is to show that, although revenue equivalence is not valid in general if the set of types is not convex, such mechanisms have distinct underlying allocation mechanisms in the extension. Thus we recover revenue equivalence in these situations.
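For context, the standard envelope characterization that underlies revenue equivalence on a convex one-dimensional type space (notation ours, not the note's) can be stated as follows:

```latex
% In any incentive-compatible mechanism with allocation rule $q_i$ and
% interim utility $U_i$ on a convex type space $[\underline{t}_i, \bar{t}_i]$,
% payments are pinned down up to a constant by the envelope formula
\[
  U_i(t_i) = U_i(\underline{t}_i) + \int_{\underline{t}_i}^{t_i} q_i(s)\,\mathrm{d}s ,
\]
% so mechanisms sharing an allocation rule yield the same expected revenue.
% Convexity of the type set is what makes this integral representation
% available, which is why convexifying the type set matters here.
```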
Abstract:
Hebb proposed that synapses between neurons that fire synchronously are strengthened, forming cell assemblies and phase sequences. The former, on a shorter scale, are ensembles of synchronized cells that function transiently as a closed processing system; the latter, on a larger scale, correspond to the sequential activation of cell assemblies able to represent percepts and behaviors. Nowadays, the recording of large neuronal populations allows for the detection of multiple cell assemblies. Within Hebb's theory, the next logical step is the analysis of phase sequences. Here we detected phase sequences as consecutive assembly activation patterns, and then analyzed their graph attributes in relation to behavior. We investigated action potentials recorded from the adult rat hippocampus and neocortex before, during and after novel object exploration (experimental periods). Within assembly graphs, each assembly corresponded to a node, and each edge corresponded to the temporal sequence of consecutive node activations. The sum of all assembly activations was proportional to firing rates, but the activity of individual assemblies was not. Assembly repertoire was stable across experimental periods, suggesting that novel experience does not create new assemblies in the adult rat. Assembly graph attributes, on the other hand, varied significantly across behavioral states and experimental periods, and were separable enough to correctly classify experimental periods (Naïve Bayes classifier; maximum AUROCs ranging from 0.55 to 0.99) and behavioral states (waking, slow wave sleep, and rapid eye movement sleep; maximum AUROCs ranging from 0.64 to 0.98). Our findings agree with Hebb's view that assemblies correspond to primitive building blocks of representation, nearly unchanged in the adult, while phase sequences are labile across behavioral states and change after novel experience. The results are compatible with a role for phase sequences in behavior and cognition.
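A minimal sketch of the assembly-graph construction described above, with an invented activation sequence:

```python
import networkx as nx

# Hypothetical sequence of detected assembly activations over time;
# each integer labels one cell assembly.
activations = [0, 2, 1, 1, 3, 0, 2, 3, 1, 0, 2]

# Assemblies are nodes; each edge is the temporal sequence of two
# consecutive (distinct) activations, weighted by how often it occurs.
G = nx.DiGraph()
for a, b in zip(activations, activations[1:]):
    if a == b:
        continue  # treat consecutive repeats as one sustained activation
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1
    else:
        G.add_edge(a, b, weight=1)

# Graph attributes of this kind served as the classifier features.
print(sorted(G.edges(data=True)))
print("density:", nx.density(G))
```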
Abstract:
This work analyses the waveshapes of continuing currents and the parameters of M-components in positive cloud-to-ground (CG) flashes through high-speed GPS-synchronized videos. The dataset is composed only of long continuing currents (with duration longer than 40 ms) and was selected from more than 800 flashes recorded in Sao Jose dos Campos (45.864 degrees W, 23.215 degrees S) and Uruguaiana (29.806 degrees W, 57.005 degrees S), in Southeast and South Brazil respectively, during the summers of 2003 to 2007. The videos are compared with data obtained by the Brazilian Lightning Location System (BrasilDAT) in order to determine the polarity of each flash and select only positive cases. There are only two studies of waveshapes of continuing currents in the literature. One is based on direct current measurements of triggered lightning, in which four different types of waveshapes were observed; the other is based on measurements of luminosity variations in high-speed videos of negative CG lightning, in which, besides the four types mentioned above, two additional types were observed. The present work is an extension of the latter, using the same method but now applied to obtain the waveshapes of positive CG lightning. As far as the authors know, this is the first report on M-components in positive continuing currents. We have also used the luminosity-versus-time graphs to observe their occurrence and to measure some parameters (duration, elapsed time and time between two successive M-components), whose statistics are presented and compared in detail with the data for negative flashes. We have plotted a histogram of the M-component elapsed time over the total duration of the continuing current for positive flashes, which presented an exponential decay (correlation coefficient: 0.83), similar to what has been observed for negative flashes.
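A sketch of the exponential-decay fit to the elapsed-time histogram, using synthetic elapsed times in place of the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Hypothetical M-component elapsed times (ms), standing in for the
# measured values; the study reports an exponential decay in the
# histogram of elapsed time over the continuing-current duration.
elapsed = rng.exponential(scale=60.0, size=300)

counts, edges = np.histogram(elapsed, bins=15)
centers = 0.5 * (edges[:-1] + edges[1:])

def exp_decay(t, a, tau):
    return a * np.exp(-t / tau)

(a, tau), _ = curve_fit(exp_decay, centers, counts, p0=(counts[0], 50.0))
r = np.corrcoef(counts, exp_decay(centers, a, tau))[0, 1]
print(f"tau={tau:.1f} ms, correlation={r:.2f}")
```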
Abstract:
Block diagrams and signal-flow graphs are used to represent interconnected systems and to obtain their transfer functions. The reduction of signal-flow graphs is considered simpler than the reduction of block diagrams for systems with complex interrelationships. Signal-flow graph reduction can be performed without graphical manipulation of diagrams, which makes it attractive for computational implementation. In this paper the authors propose a computational method for the direct reduction of signal-flow graphs. The method uses results, presented in this paper, on the calculation of literal determinants without symbolic mathematics tools. Cramer's rule is applied to solve a set of linear equations. A program in the MATLAB language for the reduction of signal-flow graphs with the proposed method is presented.
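The determinant-based reduction can be sketched as follows: write the node equations of the graph as x = Ax + bu and apply Cramer's rule to (I - A)x = bu. The branch gains below are hypothetical, and the sketch uses numeric rather than literal determinants:

```python
import numpy as np

# Node equations of a small signal-flow graph: x = A @ x + b * u,
# where A[i, j] is the branch gain from node j to node i.
A = np.array([[0.0, 0.0, 0.0],
              [2.0, 0.0, -0.5],   # feedback branch into node 1
              [0.0, 3.0, 0.0]])
b = np.array([1.0, 0.0, 0.0])     # input u enters at node 0

M = np.eye(3) - A

# Cramer's rule for the output node x2: replace column 2 of M by b.
M2 = M.copy()
M2[:, 2] = b
transfer = np.linalg.det(M2) / np.linalg.det(M)  # x2 / u
print(f"x2/u = {transfer:.4f}")                  # 2.4000 for these gains
```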
Abstract:
OBJECTIVE: to evaluate the bond strength of metallic brackets bonded to human teeth with a light-cured resin, by means of mechanical shear tests. METHODS: for this study, in vivo tests were performed with a portable digital dynamometer, and in vitro tests with a universal mechanical testing machine, with and without thermocycling, complemented by the Adhesive Remnant Index (ARI). Edgewise Standard brackets (Abzil) were bonded using Transbond Plus Self Etching Primer (SEP) adhesive and Transbond XT resin. Three groups of 10 teeth each were formed. In group I (GI) the brackets were bonded to the patients' second premolars. Groups II and III (GII, GIII) used first premolars extracted for orthodontic reasons. The mechanical tests of GI were performed 24 hours after polymerization, directly in the patients' mouths, with the portable digital dynamometer. In GII the specimens were stored in distilled water in an incubator at 37°C for 24 hours and subsequently submitted to thermocycling, with 1000 cycles at 5 and 55°C. In GIII the specimens were stored in distilled water at room temperature for 24 hours and then submitted to the mechanical tests. RESULTS: the mean shear bond strengths, in megapascals, were: GI = 4.39; GII = 7.11; and GIII = 7.35. After debonding, photographs of the bonding areas were taken for the teeth tested both in vivo and in vitro, magnified 5x to facilitate visualization. The images were analyzed and classified according to the ARI, and scatter plots were used to examine the relationship between shear bond strength and this index. CONCLUSION: the mean of the mechanical tests performed in vivo was statistically lower than that of the in vitro tests. There was no difference in in vitro shear bond strength between the thermocycled and non-thermocycled groups. There was no relationship between bond failure stress and failure type.
Abstract:
In this paper, we extend the use of the variance dispersion graph (VDG) to experiments in which the response surface (RS) design must be blocked. Through several examples we evaluate the prediction performance of RS designs in non-orthogonal block designs compared with the equivalent unblocked designs and with orthogonally blocked designs. These examples illustrate that good prediction performance of designs in small blocks can be expected in practice. Most importantly, we show that the allocation of the treatment set to blocks can seriously affect the prediction properties of a design; thus, much care is needed in performing this allocation.
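A sketch of the quantity a VDG summarizes, the scaled prediction variance over spheres of increasing radius, for a hypothetical small first-order design:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2^2 factorial with two center runs (two factors).
D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0]])
X = np.column_stack([np.ones(len(D)), D])        # model terms: 1, x1, x2
XtX_inv = np.linalg.inv(X.T @ X)
N = len(D)

for r in (0.5, 1.0, 1.5):
    # Sample points on the sphere of radius r and evaluate the
    # scaled prediction variance N * x' (X'X)^{-1} x at each point;
    # the VDG plots its min/mean/max against r.
    theta = rng.uniform(0, 2 * np.pi, 200)
    pts = r * np.column_stack([np.cos(theta), np.sin(theta)])
    Xm = np.column_stack([np.ones(len(pts)), pts])
    spv = N * np.einsum("ij,jk,ik->i", Xm, XtX_inv, Xm)
    print(f"r={r}: min={spv.min():.2f} max={spv.max():.2f}")
```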
Abstract:
Let $C_n^{\lambda}(x)$, $n = 0, 1, \ldots$, $\lambda > -1/2$, be the ultraspherical (Gegenbauer) polynomials, orthogonal in $(-1, 1)$ with respect to the weight function $(1 - x^2)^{\lambda - 1/2}$. Denote by $x_{nk}(\lambda)$, $k = 1, \ldots, n$, the zeros of $C_n^{\lambda}(x)$ enumerated in decreasing order. In this short note we prove that, for any $n \in \mathbb{N}$, the product $(\lambda + 1)^{3/2} x_{n1}(\lambda)$ is a convex function of $\lambda$ if $\lambda \geq 0$. The result is applied to obtain some inequalities for the largest zeros of $C_n^{\lambda}(x)$. If $x_{nk}(\alpha)$, $k = 1, \ldots, n$, are the zeros of the Laguerre polynomial $L_n^{\alpha}(x)$, also enumerated in decreasing order, we prove that $x_{n1}(\alpha)/(\alpha + 1)$ is a convex function of $\alpha$ for $\alpha > -1$.