958 results for Numbers, Divisibility of.


Relevance:

90.00%

Publisher:

Abstract:

Urban encroachment on dense, coastal koala populations has ensured that their management has received increasing government and public attention. The recently developed National Koala Conservation Strategy calls for maintenance of viable populations in the wild. Yet the success of this, and other, conservation initiatives is hampered by the lack of reliable and generally accepted national and regional population estimates. In this paper we address this problem in a potentially large, but poorly studied, regional population in the State that is likely to have the largest wild populations. We draw on findings from previous reports in this series and apply the faecal standing-crop method (FSCM) to derive a regional estimate of more than 59 000 individuals. Validation trials in riverine communities showed that estimates of animal density obtained from the FSCM and direct observation were in close agreement. Bootstrapping and Monte Carlo simulations were used to obtain variance estimates for our population estimates in different vegetation associations across the region. The most favoured habitat was riverine vegetation, which covered only 0.9% of the region but supported 45% of the koalas. We also estimated that between 1969 and 1995 approximately 30% of the native vegetation associations considered potential koala habitat were cleared, leading to a decline of perhaps 10% in koala numbers. Management of this large regional population has significant implications for the national conservation of the species: the continued viability of this population is critically dependent on the retention and management of riverine and residual vegetation communities, and future vegetation-management guidelines should be cognisant of the potential impacts of clearing even small areas of critical habitat. We also highlight eight management implications.

Relevance:

90.00%

Publisher:

Abstract:

There are four resolvable Steiner triple systems on fifteen elements. Some generalizations of these systems are presented here.

Relevance:

90.00%

Publisher:

Abstract:

An approximate number is an ordered pair consisting of a (real) number and an error bound, briefly an error, which is a non-negative (real) number. To compute with approximate numbers, the arithmetic operations on errors must be well understood. To model computations with errors one should suitably define and study arithmetic operations and order relations over the set of non-negative numbers. In this work we discuss the algebraic properties of non-negative numbers, starting from familiar properties of real numbers. We focus on certain operations on errors that seem not to have been sufficiently studied algebraically, restricting ourselves to the arithmetic operations on errors related to addition and multiplication by scalars. We pay special attention to subtractability-like properties of errors and the induced “distance-like” operation. This operation is used implicitly, under different names, in several contemporary fields of applied mathematics (inner subtraction and inner addition in interval analysis, the generalized Hukuhara difference in fuzzy set theory, etc.). Here we present some new results on the algebraic properties of this operation.
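
To make the error arithmetic concrete, here is a minimal, purely illustrative Python sketch (the class and function names are ours, not the paper's): approximate numbers as (value, error) pairs, errors adding under addition, errors scaling by the absolute value of a scalar, and a “distance-like” operation on errors modelled as the absolute difference, one common reading of the inner (Hukuhara-style) difference for non-negative numbers.

```python
# Illustrative sketch only: approximate numbers as (value, error) pairs with
# the error arithmetic the abstract refers to. The "distance-like" operation
# on errors is modelled here as |a - b|; this is an assumption for exposition.

from dataclasses import dataclass


@dataclass(frozen=True)
class Approx:
    value: float
    error: float  # non-negative error bound

    def __post_init__(self):
        if self.error < 0:
            raise ValueError("error bound must be non-negative")

    def __add__(self, other: "Approx") -> "Approx":
        # Errors add under addition of approximate numbers.
        return Approx(self.value + other.value, self.error + other.error)

    def scale(self, c: float) -> "Approx":
        # Multiplication by a scalar scales the error by |c|.
        return Approx(c * self.value, abs(c) * self.error)


def error_distance(a: float, b: float) -> float:
    """Distance-like operation on two non-negative errors: |a - b|."""
    return abs(a - b)


if __name__ == "__main__":
    x = Approx(3.14, 0.01)
    y = Approx(2.72, 0.005)
    print(x + y)            # roughly Approx(value=5.86, error=0.015)
    print(x.scale(-2.0))    # roughly Approx(value=-6.28, error=0.02)
    print(error_distance(x.error, y.error))  # 0.005
```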

Relevance:

90.00%

Publisher:

Abstract:

In broad terms — including a thief's use of existing credit card, bank, or other accounts — the number of identity fraud victims in the United States ranges from 9 to 10 million per year, or roughly 4% of the US adult population. The average annual theft per stolen identity was estimated at $6,383 in 2006, up approximately 22% from $5,248 in 2003; an increase in estimated total theft from $53.2 billion in 2003 to $56.6 billion in 2006. About three million Americans each year fall victim to the worst kind of identity fraud: new account fraud. Names, Social Security numbers, dates of birth, and other data are acquired fraudulently from the issuing organization or from the victim, and these data are then used to create fraudulent identity documents. In turn, these are presented to other organizations as evidence of identity and used to open new lines of credit, secure loans, “flip” property, or otherwise turn a profit in a victim's name. This is much more time-consuming — and typically more costly — to repair than fraudulent use of existing accounts. This research borrows from well-established theoretical backgrounds in an effort to answer the question: what is it that makes identity documents credible? Most importantly, identification of the components of credibility draws upon personal construct psychology, the underpinning for the repertory grid technique, a form of structured interviewing that arrives at a description of the interviewee’s constructs on a given topic, such as the credibility of identity documents. This represents a substantial contribution to theory, being the first research to use the repertory grid technique to elicit from experts the mental constructs they use to evaluate the credibility of different types of identity documents reviewed in the course of opening new accounts. The research identified twenty-one characteristics, different ones of which are present on different types of identity documents. Expert evaluations of these documents in different scenarios suggest that visual characteristics are most important for a physical document, while authenticated personal data are most important for a digital document.


Relevance:

90.00%

Publisher:

Abstract:

Modeling studies predict that changes in radiocarbon (14C) reservoir ages of surface waters during the last deglacial episode will reflect changes in both atmospheric 14C concentration and ocean circulation, including the Atlantic Meridional Overturning Circulation. Tests of these models require accurate 14C reservoir ages in well-dated late Quaternary time series. Here we test two models using plateau-tuned 14C time series in multiple well-placed sediment core age-depth sequences throughout the lower latitudes of the Atlantic Ocean. 14C age plateau tuning in glacial and deglacial sequences provides accurate calendar year ages that differ by as much as 500-2500 years from those based on an assumed global reservoir age of around 400 years. This study demonstrates increases in local Atlantic surface reservoir ages of up to 1000 years during the Last Glacial Maximum, ages that reflect stronger trade winds off Benguela and summer winds off southern Brazil. By contrast, surface water reservoir ages remained close to zero in the Cariaco Basin in the southern Caribbean due to lagoon-style isolation and persistently strong atmospheric CO2 exchange. Later, during the early deglacial (16 ka), reservoir ages decreased to a minimum of 170-420 14C years throughout the South Atlantic, likely in response to the rapid rise in atmospheric pCO2 and Antarctic temperatures occurring then. Changes in the magnitude and geographic distribution of 14C reservoir ages of peak glacial and deglacial surface waters deviate from the results of Franke et al. (2008) but are generally consistent with those of the more advanced ocean circulation model of Butzin et al. (2012).

Relevance:

90.00%

Publisher:

Abstract:

We develop further the new versions of quantum chromatic numbers of graphs introduced by the first and fourth authors. We prove that the problem of computing the commuting quantum chromatic number of a graph is solvable by an SDP algorithm and describe a hierarchy of variants of the commuting quantum chromatic number which converge to it. We introduce the tracial rank of a graph, a parameter that gives a lower bound for the commuting quantum chromatic number and parallels the projective rank, and prove that it is multiplicative. We describe the tracial rank, the projective rank and the fractional chromatic number in a unified manner that clarifies their connection with the commuting quantum chromatic number, the quantum chromatic number and the classical chromatic number, respectively. Finally, we present a new SDP algorithm that yields a parameter larger than the Lovász number and is yet a lower bound for the tracial rank of the graph. We determine the precise value of the tracial rank of an odd cycle.
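
For orientation only (this is not the authors' algorithm or hierarchy), the sketch below shows the kind of semidefinite program such graph parameters reduce to: the classical Lovász number of a graph, computed with cvxpy. The function name and the choice of cvxpy are ours, for illustration.

```python
# Illustrative sketch only: the classical Lovász theta number as an SDP,
# max <J, X> s.t. X is PSD, tr(X) = 1, X_ij = 0 for every edge ij.
# Requires cvxpy (pip install cvxpy).

import cvxpy as cp


def lovasz_theta(n: int, edges: list[tuple[int, int]]) -> float:
    """Lovász number of a graph on n vertices with the given edge list."""
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    constraints += [X[i, j] == 0 for (i, j) in edges]
    problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    problem.solve()
    return problem.value


if __name__ == "__main__":
    # 5-cycle C5: theta(C5) = sqrt(5) ~ 2.236.
    c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(lovasz_theta(5, c5_edges))
```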

Relevance:

90.00%

Publisher:

Abstract:

Two field experiments were carried out in Taveuni, Fiji to study the effects of mucuna (Mucuna pruriens) and grass fallow systems at 6 and 12 month durations on changes in soil properties (Experiment 1) and taro yields (Experiment 2). Biomass accumulation of mucuna fallow crop was significantly higher (P<0.05) than grass fallow crop at both 6 and 12 month durations. The longer fallow duration resulted in higher (P<0.05) total soil organic carbon, total soil nitrogen and earthworm numbers regardless of fallow type. Weed suppression in taro grown under mucuna was significantly greater (P<0.05) than under natural grass fallow. Taro grown under mucuna fallow significantly outyielded taro grown under grass fallow (11.8 vs. 8.8 t ha⁻¹). Also, the gross margin of taro grown under mucuna fallow was 52% higher than that of taro grown under grass fallow.

Relevance:

90.00%

Publisher:

Abstract:

City cadastral street map showing lot/tract lines, lot numbers, names of owners of rural tracts, building coverage, ward boundaries, and ward numbers.

Relevance:

80.00%

Publisher:

Abstract:

We explore here the acceleration of convergence of iterative methods for the solution of a class of quasilinear and linear algebraic equations. The specific systems are the finite-difference form of the Navier-Stokes equations and the energy equation for recirculating flows. The acceleration procedures considered are: the successive over-relaxation scheme; several implicit methods; and a second-order procedure. A new implicit method—the alternating direction line iterative method—is proposed in this paper. The method combines the advantages of the line successive over-relaxation and alternating direction implicit methods. The various methods are tested for their computational economy and accuracy on a typical recirculating flow situation. The numerical experiments show that the alternating direction line iterative method is the most economical method of solving the Navier-Stokes equations for all Reynolds numbers in the laminar regime. The usual ADI method is shown to be less attractive for large Reynolds numbers because of the loss of diagonal dominance. This loss can, however, be restored by a suitable choice of the relaxation parameter, but at the cost of accuracy. The accuracy of the new procedure is comparable to that of the well-tested successive over-relaxation method and to the available results in the literature. The second-order procedure turns out to be the most efficient method for the solution of the linear energy equation.
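
As a point of reference (a generic sketch, not the paper's discretization or code), successive over-relaxation for a linear system A x = b looks as follows in Python; omega = 1 reduces it to Gauss-Seidel, and the convergence rate depends strongly on the choice of omega.

```python
# Generic illustration of successive over-relaxation (SOR) for A x = b.
# Not the paper's Navier-Stokes discretization; just the baseline scheme.

import numpy as np


def sor(A: np.ndarray, b: np.ndarray, omega: float = 1.5,
        tol: float = 1e-10, max_iter: int = 10_000) -> np.ndarray:
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated values for j < i, previous sweep for j > i.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x


if __name__ == "__main__":
    # Small diagonally dominant test system, so SOR converges.
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(sor(A, b), np.linalg.solve(A, b))
```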

Relevance:

80.00%

Publisher:

Abstract:

PWM waveforms with a positive voltage transition at the positive zero crossing of the fundamental voltage (type-A) are generally considered for PWM waveforms with an even number of switching angles per quarter, whereas waveforms with a negative voltage transition at the positive zero crossing (type-B) are considered for an odd number of switching angles per quarter. Optimal PWM, for minimization of the total harmonic distortion of the line-to-line voltage (V_WTHD), is generally solved with the aforementioned criteria. This paper establishes that a combination of both types of waveforms gives better performance than either individual type in terms of minimum V_WTHD over the complete range of modulation index (M). Optimal PWM for minimum V_WTHD is solved for PWM waveforms with pulse numbers (P) of 5 and 7. Both type-A and type-B waveforms are found to be better in different ranges of M. The theoretical findings are confirmed through simulation and experimental results on a 3.7 kW squirrel cage induction motor in an open-loop V/f drive. Further, the optimal PWM is analysed from a space vector point of view.
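
As a purely illustrative sketch (the switching angles below are arbitrary placeholders, not the optimal solutions reported in the paper, and the waveform convention is our assumption), the weighted THD of a quarter-wave-symmetric two-level PWM waveform can be evaluated numerically for a given set of switching angles:

```python
# Purely illustrative: numerical evaluation of weighted THD for a
# quarter-wave-symmetric two-level PWM waveform. starts_high=True plays the
# role of a positive transition at the positive zero crossing (type-A),
# False the role of type-B. Angle values are placeholders, not the paper's.

import numpy as np


def pwm_waveform(angles_deg, starts_high: bool, n_samples: int = 200_000) -> np.ndarray:
    """Normalised two-level waveform (+1/-1) over one fundamental cycle."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    angles = np.sort(np.deg2rad(np.asarray(angles_deg, dtype=float)))
    # Fold every angle into the first quarter [0, pi/2] (quarter-wave symmetry).
    half = theta % np.pi
    quarter = np.minimum(half, np.pi - half)
    # The level toggles once at each switching angle below the folded angle.
    parity = np.searchsorted(angles, quarter) % 2
    sign0 = 1.0 if starts_high else -1.0
    v = sign0 * np.where(parity == 0, 1.0, -1.0)
    return np.where(theta < np.pi, v, -v)  # half-wave (odd) symmetry


def wthd(v: np.ndarray, n_harmonics: int = 500, line_to_line: bool = True) -> float:
    """Weighted THD: sqrt(sum_{n>=2} (V_n/n)^2) / V_1. Triplen harmonics are
    skipped for the line-to-line voltage of a balanced three-phase inverter."""
    spectrum = np.abs(np.fft.rfft(v)) / (len(v) / 2.0)  # harmonic amplitudes
    weighted = 0.0
    for n in range(2, n_harmonics + 1):
        if line_to_line and n % 3 == 0:
            continue
        weighted += (spectrum[n] / n) ** 2
    return float(np.sqrt(weighted) / spectrum[1])


if __name__ == "__main__":
    angles = [12.0, 25.0, 38.0, 55.0, 70.0]  # P = 5 placeholder angles (degrees)
    v = pwm_waveform(angles, starts_high=False)
    print(f"V_WTHD = {100.0 * wthd(v):.3f} %")
```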

Relevance:

80.00%

Publisher:

Abstract:

The construction and LHC phenomenology of the razor variables MR, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance of events and missing transverse energy, are presented. The variables are used in the analysis of the first proton-proton collisions dataset at CMS (35 pb⁻¹) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the highest-sensitivity results for SUSY to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC using 4.7±0.1 fb⁻¹ of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of MR and R, and the analysis is performed in 12 tiers of all-hadronic, single-lepton and double-lepton final states in the presence and absence of b-quarks, probing the third-generation sector using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV, for small neutralino masses, and the first two generations of squarks, stops and sbottoms are excluded for masses up to about 800, 425 and 400 GeV, respectively.
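
For reference, the razor variables are conventionally built from the two megajets into which each event is clustered. The definitions below follow the standard CMS razor convention and are quoted from memory of the published razor analyses, not taken from this abstract; the megajet notation is an assumption.

```latex
% Standard razor definitions (CMS convention), quoted for reference only;
% p_{j1}, p_{j2} are the megajet momenta and E_T^{miss} the missing transverse energy.
M_R \equiv \sqrt{\left(|\vec{p}^{\,j1}| + |\vec{p}^{\,j2}|\right)^2
                 - \left(p_z^{\,j1} + p_z^{\,j2}\right)^2},
\qquad
M_T^R \equiv \sqrt{\frac{E_T^{\mathrm{miss}}\left(p_T^{\,j1} + p_T^{\,j2}\right)
                 - \vec{E}_T^{\mathrm{miss}} \cdot \left(\vec{p}_T^{\,j1} + \vec{p}_T^{\,j2}\right)}{2}},
\qquad
R \equiv \frac{M_T^R}{M_R}.
```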

With the discovery of a new boson by the CMS and ATLAS experiments in the γ-γ and 4-lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method for measuring the J^PC of this particle using the observed signal events in the H to ZZ* to 4 lepton channel, developed before the discovery. Adaptations of the razor kinematic variables are introduced for the H to WW* to 2 lepton/2 neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^PC are discussed, with indications that it could provide complementary information to the H to ZZ* to 4 lepton final state, particularly for measuring CP violation in these decays.

Relevance:

80.00%

Publisher:

Abstract:

This work gives a brief, objective account of the history of the evolution of numbers, from the first scratch on a bone to the form in which we know them today. Over roughly 30,000 years of existence, numeration systems, their bases and their representations have undergone countless modifications, adapting to the prevailing historical context. Among the driving factors we can cite the scientific mentality of each period, the need to conquer territories, religions and beliefs, and the basic necessities of everyday life. We thus present a historical thread that attempts to explain how and why the idea of number changes over time, always bearing in mind the factors that motivated those changes and the benefits (or harms) they brought with them. With a chapter devoted to each of the most important civilizations that contributed to the growth of mathematics and, whenever possible, following the chronological order of events, the reader gains a good sense of how one civilization influenced another and how later peoples were able to build on the knowledge acquired from their predecessors to produce their own algorithms and theorems.

Relevance:

80.00%

Publisher:

Abstract:

What is meant by the term random? Do we understand how to identify which type of randomisation to use in our future research projects? We, as researchers, often explain randomisation to potential research participants as being a 50/50 chance of selection to either an intervention or control group, akin to drawing numbers out of a hat. Is this an accurate explanation? And are all methods of randomisation equal? This paper aims to guide the researcher through the different techniques used to randomise participants with examples of how they can be used in educational research.
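
As a concrete (and purely illustrative) contrast between two such techniques, the short Python sketch below compares simple randomisation with permuted-block randomisation; the function names and block size are our choices, not taken from the paper.

```python
# Illustrative sketch of two common allocation techniques. Simple
# randomisation can leave groups unbalanced in small samples, while
# permuted-block randomisation guarantees balance within each block.

import random


def simple_randomisation(n: int, seed: int = 1) -> list[str]:
    """Independent 50/50 allocation: a coin toss for each participant."""
    rng = random.Random(seed)
    return [rng.choice(["intervention", "control"]) for _ in range(n)]


def block_randomisation(n: int, block_size: int = 4, seed: int = 1) -> list[str]:
    """Permuted blocks: each block contains equal numbers of each arm."""
    assert block_size % 2 == 0, "block size must be even for two equal arms"
    rng = random.Random(seed)
    allocation: list[str] = []
    while len(allocation) < n:
        block = ["intervention", "control"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]


if __name__ == "__main__":
    simple = simple_randomisation(12)
    blocked = block_randomisation(12)
    print(simple.count("intervention"), "of 12 in intervention (simple)")
    print(blocked.count("intervention"), "of 12 in intervention (blocked)")  # always 6
```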

Relevance:

80.00%

Publisher:

Abstract:

Let f(x) be a complex rational function. In this work, we study conditions under which f(x) cannot be written as the composition of two rational functions which are not units under the operation of function composition. In this case, we say that f(x) is prime. We give sufficient conditions for complex rational functions to be prime in terms of their degrees and their critical values, and we derive some conditions for the case of complex polynomials. We consider also the divisibility of integral polynomials, and we present a generalization of a theorem of Nieto. We show that if f(x) and g(x) are integral polynomials such that the content of g divides the content of f and g(n) divides f(n) for an integer n whose absolute value is larger than a certain bound, then g(x) divides f(x) in Z[x]. In addition, given an integral polynomial f(x), we provide a method to determine if f is irreducible over Z, and if not, find one of its divisors in Z[x].
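
To make the divisibility statement concrete, here is a small illustrative check in Python with sympy (not the paper's method; the example polynomials are arbitrary): g divides f in Z[x] exactly when the division leaves zero remainder with an integer-coefficient quotient, and in that case g(n) divides f(n) at every integer n, the direction of value divisibility that the Nieto-type criterion reverses under the stated hypotheses.

```python
# Illustrative only: divisibility in Z[x] checked with sympy, and the
# resulting value divisibility g(n) | f(n) at integer arguments.
# Requires sympy (pip install sympy).

from sympy import symbols, div, Poly

x = symbols('x')


def divides_in_Zx(g, f) -> bool:
    """True if g divides f in Z[x]: zero remainder and integer quotient."""
    q, r = div(f, g, x, domain='QQ')
    return r == 0 and all(c.is_Integer for c in Poly(q, x).all_coeffs())


if __name__ == "__main__":
    f = x**3 - 1
    g = x - 1
    print(divides_in_Zx(g, f))       # True: x**3 - 1 = (x - 1)(x**2 + x + 1)
    # Polynomial divisibility implies value divisibility at every integer:
    for n in range(2, 8):
        fn, gn = f.subs(x, n), g.subs(x, n)
        print(n, fn % gn == 0)       # all True
```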