929 results for Uniformly
Abstract:
We examine the use of randomness extraction and expansion in key agreement (KA) protocols to generate uniformly random keys in the standard model. Although existing works provide the basic theorems necessary, they lack details or examples of appropriate cryptographic primitives and/or parameter sizes. This has led to the large amount of min-entropy needed in the (non-uniform) shared secret being overlooked in proposals and efficiency comparisons of KA protocols. We therefore summarize existing work in the area and examine the security levels achieved with the use of various extractors and expanders for particular parameter sizes. The tables presented herein show that the shared secret needs a min-entropy of at least 292 bits (and even more with more realistic assumptions) to achieve an overall security level of 80 bits using the extractors and expanders we consider. The tables may be used to find the min-entropy required for various security levels and assumptions. We also find that when using the short exponent theorems of Gennaro et al., the short exponents may need to be much longer than they suggested.
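A minimal sketch, assuming a universal-hash extractor analysed with the standard leftover hash lemma bound k >= m + 2*log2(1/eps), of where min-entropy figures of this magnitude come from. The key length and statistical-distance bound below are illustrative assumptions, not the paper's own parameter choices.

```python
# Minimal sketch (illustrative assumptions, not the paper's accounting):
# the leftover hash lemma says a universal hash family can extract an m-bit
# key that is eps-close to uniform from a source whose min-entropy k satisfies
# k >= m + 2*log2(1/eps).

def min_entropy_required(key_bits: int, distance_exponent: int) -> int:
    """Min-entropy needed to extract `key_bits` bits that are
    2^(-distance_exponent)-close to uniform (leftover hash lemma bound)."""
    return key_bits + 2 * distance_exponent

if __name__ == "__main__":
    # e.g. a 128-bit session key extracted to within statistical distance 2^-82
    print(min_entropy_required(128, 82))  # -> 292 under these assumed parameters
```

The extractor's entropy loss of 2*log2(1/eps) bits is what pushes the required min-entropy of the shared secret well above the length of the extracted key, which is the effect the abstract highlights.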
Abstract:
Most forms of tissue healing depend critically on revascularisation. In soft tissues and in vitro, mechanical stimuli have been shown to promote vessel-forming activity. However, in bone defects, increased interfragmentary motion impairs vascular regeneration. Because these effects seem contradictory, we aimed to determine whether a range of mechanical stimuli exists in which angiogenesis is favoured. A series of cyclic strain magnitudes were applied to a Matrigel-based “tube formation” assay, and the total lengths of networks formed by human microvascular endothelial cells were measured at 24 h. Network lengths were reduced at all strain levels, compared to unstretched controls. However, the levels of pro-angiogenic matrix metalloproteases-2 and -9 in the corresponding conditioned media were unchanged by strain, and vascular endothelial growth factor was uniformly elevated in stretched conditions. By repeating the assay with the addition of conditioned media from mesenchymal stem cells cultivated in similar conditions, paracrine stimuli were shown to increase network lengths, but not to alter the negative effect of cyclic stretching. Together, these results demonstrate that directly applied periodic strains can inhibit endothelial organisation in vitro, and suggest that this may be due to physical disruption rather than biochemical modulation. Most importantly, the results indicate that the straining of endothelial cells and their assembly into vascular-like structures must be studied simultaneously to adequately characterise the mechanical influence on vessel formation.
Abstract:
Dispersion characteristics of respiratory droplets in indoor environments are of special interest in controlling transmission of airborne diseases. This study adopts an Eulerian method to investigate the spatial concentration distribution and temporal evolution of exhaled and sneezed/coughed droplets in the range of 1.0–10.0 μm in an office room with three air distribution methods, i.e. mixing ventilation (MV), displacement ventilation (DV), and under-floor air distribution (UFAD). The diffusion, gravitational settling, and deposition mechanisms of particulate matter are well accounted for in the one-way coupled Eulerian approach. The simulation results show that exhaled droplets with diameters up to 10.0 μm from the normal respiration process are uniformly distributed in MV, while they are trapped at breathing height by thermal stratification in DV and UFAD, resulting in a high droplet concentration and a high exposure risk to other occupants. Sneezed/coughed droplets are diluted much more slowly in DV/UFAD than in MV. Low air speed in the breathing zone in DV/UFAD can lead to prolonged residence of droplets in the breathing zone.
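As a rough, self-contained illustration of one mechanism named above (gravitational settling), the sketch below evaluates Stokes' law for droplets in the 1.0–10.0 μm range. The droplet density and air properties are typical assumed values; the study's Eulerian drift-flux model is not reproduced here.

```python
# Illustrative sketch: Stokes-regime settling velocity of small droplets in still
# air. Property values are common assumptions, not taken from the study.

G = 9.81            # gravitational acceleration, m/s^2
MU_AIR = 1.81e-5    # dynamic viscosity of air, Pa*s (approx. at 20 degC)
RHO_AIR = 1.2       # air density, kg/m^3
RHO_DROP = 1000.0   # droplet density, kg/m^3 (water-like, assumed)

def stokes_settling_velocity(diameter_m: float) -> float:
    """Terminal settling velocity of a small sphere in still air (Stokes' law)."""
    return (RHO_DROP - RHO_AIR) * diameter_m ** 2 * G / (18 * MU_AIR)

if __name__ == "__main__":
    for d_um in (1.0, 5.0, 10.0):
        v = stokes_settling_velocity(d_um * 1e-6)
        print(f"{d_um:5.1f} um droplet: settling velocity ~ {v * 1000:.3f} mm/s")
```

At these sizes the settling velocities are on the order of millimetres per second or less, which is consistent with the finding that the room air distribution pattern, rather than gravity, largely determines where such droplets accumulate.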
Abstract:
Background and Objective: As global warming continues, the frequency, intensity and duration of heatwaves are likely to increase. However, a heatwave is unlikely to be defined uniformly because acclimatisation plays a significant role in determining the heat-related impact. This study investigated how to best define a heatwave in Brisbane, Australia. Methods: Computerised datasets on daily weather, air pollution and health outcomes between 1996 and 2005 were obtained from pertinent government agencies. Paired t-tests and case-crossover analyses were performed to assess the relationship between heatwaves and health outcomes using different heatwave definitions. Results: The maximum temperature was as high as 41.5°C with a mean maximum daily temperature of 26.3°C. None of the five commonly-used heatwave definitions suited Brisbane well on the basis of the health effects of heatwaves. Additionally, there were pros and cons when locally-defined definitions were attempted using either a relative or absolute definition for extreme temperatures. Conclusion: The issue of how to best define a heatwave is complex. It is important to identify an appropriate definition of heatwave locally and to understand its health effects.
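To make the distinction between "relative" and "absolute" heatwave definitions concrete, the sketch below flags heatwave days in a toy daily maximum-temperature series using both kinds of threshold. The 35 degC absolute threshold, the 95th-percentile relative threshold, and the three-consecutive-day rule are illustrative assumptions, not the five definitions examined in the study.

```python
# Illustrative sketch of absolute vs relative heatwave definitions.
# Thresholds and run length are assumptions for illustration only.

import numpy as np

def heatwave_days(tmax, threshold, min_run=3):
    """Mark days belonging to runs of >= min_run consecutive days whose
    daily maximum temperature is at or above `threshold`."""
    hot = np.asarray(tmax) >= threshold
    flags = np.zeros(len(hot), dtype=bool)
    run_start = None
    for i, h in enumerate(np.append(hot, False)):  # sentinel False ends the last run
        if h and run_start is None:
            run_start = i
        elif not h and run_start is not None:
            if i - run_start >= min_run:
                flags[run_start:i] = True
            run_start = None
    return flags

if __name__ == "__main__":
    # Toy daily Tmax series with day-to-day persistence (AR(1) anomalies), degC.
    rng = np.random.default_rng(1)
    noise = rng.standard_normal(3650)
    anomaly = np.zeros(3650)
    for i in range(1, 3650):
        anomaly[i] = 0.8 * anomaly[i - 1] + noise[i]
    tmax = 26.3 + 3.0 * anomaly

    absolute = heatwave_days(tmax, threshold=35.0)                      # absolute definition
    relative = heatwave_days(tmax, threshold=np.percentile(tmax, 95))   # relative definition
    print(absolute.sum(), "days flagged (absolute);", relative.sum(), "days flagged (relative)")
```

The two definitions generally flag different numbers of days, which is the practical tension the abstract points to when choosing a locally appropriate definition.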
Abstract:
This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system. The robot navigates using solely information from the bearing sensor space. Most existing robot navigation systems require a ground frame (2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. The commonly used sensors such as laser range scanners, sonar, infrared, and vision do not directly provide the 2D ground coordinates of the robot. The existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process. Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering. This thesis introduces a navigation method based on reflex actions in sensor space. The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterised pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space, provided the environment contains three landmarks and is free of obstacles. The trajectories of a robot using reflex navigation, like other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimised, not the moving distance on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot, and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically, and evaluated both in simulation and with experiments on a real robot.
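The reflex action described above can be pictured with a small numerical sketch: gradient descent on the squared difference between the current and target bearing vectors, using three landmarks. The landmark layout, step size, and finite-difference gradient are assumptions for illustration and are not taken from the thesis.

```python
# Illustrative sketch of a bearing-only "reflex" step: descend the squared
# sensor-space error between current and target landmark bearings.
# All numeric choices below are assumptions, not the thesis's implementation.

import numpy as np

def bearings(position, landmarks):
    """Bearing (angle) from `position` to each landmark, in radians."""
    deltas = landmarks - position
    return np.arctan2(deltas[:, 1], deltas[:, 0])

def angular_error(a, b):
    """Wrapped angular differences, in [-pi, pi)."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def reflex_step(position, target_bearings, landmarks, step=0.05, h=1e-4):
    """One reflex action: a small move along the negative numerical gradient
    of the squared error in bearing-sensor space."""
    def cost(p):
        return np.sum(angular_error(bearings(p, landmarks), target_bearings) ** 2)

    ex, ey = np.array([h, 0.0]), np.array([0.0, h])
    grad = np.array([
        (cost(position + ex) - cost(position - ex)) / (2 * h),
        (cost(position + ey) - cost(position - ey)) / (2 * h),
    ])
    return position - step * grad

if __name__ == "__main__":
    landmarks = np.array([[0.0, 0.0], [5.0, 0.0], [2.5, 4.0]])  # three landmarks
    target = np.array([3.0, 1.0])                               # ground target B
    target_b = bearings(target, landmarks)                      # its image in sensor space

    p = np.array([0.5, 3.0])                                    # start position A
    for _ in range(2000):
        p = reflex_step(p, target_b, landmarks)
    print(p)  # moves towards [3.0, 1.0] in this obstacle-free example
```

A sequence of sensor-space waypoints, as the thesis proposes, would simply replace the single `target_b` with a list of bearing vectors visited in order.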
Abstract:
Although systemic androgen deprivation prolongs life in advanced prostate cancer, remissions are temporary because patients almost uniformly progress to a state of castration-resistant prostate cancer (CRPC), as indicated by recurring PSA. This complex process of progression does not seem to be stochastic, as the timing and phenotype are highly predictable, including the observation that most androgen-regulated genes are reactivated despite castrate levels of serum androgens. Recent evidence indicates that intraprostatic levels of androgens remain moderately high following systemic androgen deprivation therapy, whereas the androgen receptor (AR) remains functional, and silencing AR expression following castration suppresses tumor growth and blocks the expression of genes known to be regulated by androgens. From these observations, we hypothesized that CRPC progression is not independent of androgen-driven activity and that androgens may be synthesized de novo in CRPC tumors, leading to AR activation. Using the LNCaP xenograft model, we showed that tumor androgens increase during CRPC progression in correlation with PSA up-regulation. We show here that all enzymes necessary for androgen synthesis are expressed in prostate cancer tumors and some seem to be up-regulated during CRPC progression. Using ex vivo radiotracing assays coupled to high-performance liquid chromatography-radiometric/mass spectrometry detection, we show that tumor explants isolated during CRPC progression are capable of de novo conversion of [(14)C]acetic acid to dihydrotestosterone, and that uptake of [(3)H]progesterone allows detection of the production of six other steroids upstream of dihydrotestosterone. This evidence suggests that de novo androgen synthesis may be a driving mechanism leading to CRPC progression following castration.
Abstract:
Network-induced delay in networked control systems (NCS) is inherently non-uniformly distributed and exhibits multifractal behaviour. However, such network characteristics have not been well considered in NCS analysis and synthesis. Making use of the statistical distribution of the NCS network-induced delay, a delay-distribution-based stochastic model is adopted to link Quality-of-Control and network Quality-of-Service for NCS with uncertainties. From this model, together with a tighter bounding technique for cross terms, H∞ NCS analysis is carried out with significantly improved stability results. Furthermore, a memoryless H∞ controller is designed to stabilize the NCS and to achieve the prescribed disturbance attenuation level. Numerical examples are given to demonstrate the effectiveness of the proposed method.
Abstract:
This thesis presents an original approach to parametric speech coding at rates below 1 kbit/sec, primarily for speech storage applications. Essential processes considered in this research encompass efficient characterization of the evolutionary configuration of the vocal tract to follow phonemic features with high fidelity, representation of speech excitation using minimal parameters with minor degradation in the naturalness of synthesized speech, and finally, quantization of the resulting parameters at the nominated rates. For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points over the time trajectories of spectral parameters using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, and hence allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimally shaped event-approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for decomposition of feature parameters which significantly reduces the complexity of the interpolation process. Speech excitation in this work is characterized based on a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to excitation and computes pitch and voicing with high accuracy as well. New methods for interpolative coding of pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour by a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used, which reduces the amount of gain information to about 1/6 with acceptable accuracy. The thesis also addresses a new quantization method applied to spectral features on the basis of statistical properties and the spectral sensitivity of spectral parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders at rates over 2 kbits/sec, can be achieved at rates of 650–990 bits/sec.
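As a rough illustration of the temporal-decomposition idea described above, the sketch below reconstructs a toy parameter trajectory from a handful of "event" frames using overlapping interpolation functions. The triangular event functions, the event locations, and the toy signal are assumptions for illustration; the thesis develops its own basis function and event-approximation procedure.

```python
# Illustrative sketch of event-based interpolation: a parameter track is
# approximated by values at a few event frames, interpolated with overlapping
# localised functions. Shapes and event positions are assumptions only.

import numpy as np

def triangular_event_functions(n_frames, event_centres):
    """Overlapping triangular functions, one per event, summing to ~1 per frame."""
    t = np.arange(n_frames)
    phi = np.zeros((len(event_centres), n_frames))
    for k, c in enumerate(event_centres):
        left = event_centres[k - 1] if k > 0 else c
        right = event_centres[k + 1] if k < len(event_centres) - 1 else c
        rising = np.clip((t - left) / max(c - left, 1), 0, 1)
        falling = np.clip((right - t) / max(right - c, 1), 0, 1)
        phi[k] = np.minimum(rising, falling)
        phi[k, t == c] = 1.0  # exact interpolation at the event frame
    return phi

if __name__ == "__main__":
    # A toy one-dimensional "spectral parameter" track, 100 frames long.
    frames = np.arange(100)
    track = np.sin(frames / 15.0) + 0.05 * np.random.randn(100)

    events = [0, 25, 55, 80, 99]           # assumed event centroids
    phi = triangular_event_functions(100, events)
    targets = track[events]                # parameter value at each event
    reconstruction = targets @ phi         # interpolated trajectory

    rmse = np.sqrt(np.mean((reconstruction - track) ** 2))
    print(f"RMSE of 5-event reconstruction over 100 frames: {rmse:.3f}")
```

Because the amount of transmitted data depends on the number of events rather than the frame rate, the frame rate can be raised for finer formant tracking without increasing the spectral data rate, which is the property the abstract emphasises.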
Abstract:
This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS [Information System], to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes Organizational-Impact and Individual-Impact dimensions; the "quality" half includes System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalisable, to yield results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employs perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalisation of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use. From examination of the literature, the study proposes that IS-Impact is an Analytic Theory. Gregor (2006) defines Analytic Theory simply as theory that ‘says what is’, base theory that is foundational to all other types of theory. The overarching research question thus is "Does IS-Impact positively manifest the attributes of Analytic Theory?" In order to address this question, we must first answer the question "What are the attributes of Analytic Theory?" The study identifies the main attributes of analytic theory as: (1) Completeness, (2) Mutual Exclusivity, (3) Parsimony, (4) Appropriate Hierarchy, (5) Utility, and (6) Intuitiveness. The value of empirical research in Information Systems is often assessed along two main dimensions: rigor and relevance. Those Analytic Theory attributes associated with the ‘rigor’ of the IS-Impact model, namely completeness, mutual exclusivity, parsimony and appropriate hierarchy, have been addressed in prior research (e.g. Gable et al., 2008). Though common tests of rigor are widely accepted and relatively uniformly applied (particularly in relation to positivist, quantitative research), attention to relevance has seldom been given the same systematic attention. This study assumes a mainly practice perspective, and emphasises the methodical evaluation of the Analytic Theory ‘relevance’ attributes represented by the Utility and Intuitiveness of the IS-Impact model. Thus, related research questions are: "Is the IS-Impact model intuitive to practitioners?" and "Is the IS-Impact model useful to practitioners?" March and Smith (1995) identify four outputs of Design Science: constructs, models, methods and instantiations (Design Science research may involve one or more of these). IS-Impact can be viewed as a design science model, composed of Design Science constructs (the four IS-Impact dimensions and the two model halves), and instantiations in the form of management information (IS-Impact data organised and presented for management decision making).
In addition to methodically evaluating the Utility and Intuitiveness of the IS-Impact model and its constituent constructs, the study also aims to evaluate the derived management information. Thus, further research questions are: "Is the IS-Impact derived management information intuitive to practitioners?" and "Is the IS-Impact derived management information useful to practitioners?" The study employs a longitudinal design entailing three surveys over 4 years (the 1st involving secondary data) of the Oracle-Financials application at QUT, interspersed with focus groups involving senior financial managers. The study also entails a survey of Financials at four other Australian Universities. The three focus groups respectively emphasise: (1) the IS-Impact model, (2) the 2nd survey at QUT (descriptive), and (3) comparison across surveys within QUT, and between QUT and the group of Universities. Aligned with the track goal of producing IS-Impact scores that are highly comparable, the study also addresses the more specific utility-related questions, "Is IS-Impact derived management information a useful comparator across time?" and "Is IS-Impact derived management information a useful comparator across universities?" The main contribution of the study is evidence of the utility and intuitiveness of IS-Impact to practice, thereby further substantiating the practical value of the IS-Impact approach, and thereby also motivating continuing and further research on the validity of IS-Impact and research employing the IS-Impact constructs in descriptive, predictive and explanatory studies. The study also has value methodologically as an example of relatively rigorous attention to relevance. A further key contribution is the clarification and instantiation of the full set of analytic theory attributes.
Abstract:
The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin’s notion of finite thickness and Wright’s work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara’s notion of bounded finite thickness gives sufficient conditions for learnability with ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let Ω be a notation for the first limit ordinal. Then, it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length ≤ m:
• is identifiable in the limit from positive data with a mind change bound of Ω^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of Ω × m.
The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro’s linear programs, Arimura and Shinohara’s depth-bounded linearly covering programs, and Krishna Rao’s depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
Abstract:
Australian privacy law regulates how government agencies and private sector organisations collect, store and use personal information. A coherent conceptual basis of personal information is an integral requirement of information privacy law as it determines what information is regulated. A 2004 report conducted on behalf of the UK’s Information Commissioner (the 'Booth Report') concluded that there was no coherent definition of personal information currently in operation because different data protection authorities throughout the world conceived the concept of personal information in different ways. The authors adopt the models developed by the Booth Report to examine the conceptual basis of statutory definitions of personal information in Australian privacy laws. Research findings indicate that the definition of personal information is not construed uniformly in Australian privacy laws and that different definitions rely upon different classifications of personal information. A similar situation is evident in a review of relevant case law. Despite this, the authors conclude the article by asserting that a greater jurisprudential discourse is required based on a coherent conceptual framework to ensure the consistent development of Australian privacy law.
Abstract:
This study investigated a novel drug delivery system (DDS), consisting of polycaprolactone (PCL) or polycaprolactone–20% tricalcium phosphate (PCL-TCP) biodegradable scaffolds, fibrin Tisseel sealant and recombinant human bone morphogenetic protein-2 (rhBMP-2), for bone regeneration. PCL-fibrin and PCL-TCP-fibrin composites displayed loading efficiencies of 70% and 43%, respectively. Fluorescence and scanning electron microscopy revealed sparse clumps of rhBMP-2 particles non-uniformly distributed on the rod surfaces of the PCL-fibrin composites. In contrast, individual rhBMP-2 particles were evident and uniformly distributed on the rod surfaces of the PCL-TCP-fibrin composites. PCL-fibrin composites loaded with 10 and 20 μg/ml rhBMP-2 demonstrated a triphasic release profile, as quantified by an enzyme-linked immunosorbent assay (ELISA). This consisted of burst releases at 2 h and on days 7 and 16. A biphasic release profile was observed for PCL-TCP-fibrin composites loaded with 10 μg/ml rhBMP-2, consisting of burst releases at 2 h and on day 14. PCL-TCP-fibrin composites loaded with 20 μg/ml rhBMP-2 showed a triphasic release profile, consisting of burst releases at 2 h and on days 10 and 21. We conclude that the addition of TCP caused a delay in rhBMP-2 release. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) and an alkaline phosphatase assay verified the stability and bioactivity of the eluted rhBMP-2 at all time points.
Abstract:
Engineered tissue grafts, which mimic the spatial variations of cell density and extracellular matrix present in native tissues, could facilitate more efficient tissue regeneration and integration. We previously demonstrated that cells could be uniformly seeded throughout a 3D scaffold having a random pore architecture using a perfusion bioreactor. In this work, we aimed to generate 3D constructs with defined cell distributions based on rapid prototyped scaffolds manufactured with a controlled gradient in porosity. Computational models were developed to assess the influence of fluid flow, associated with pore architecture and perfusion regime, on the resulting cell distribution.
Abstract:
Stereotypes of salespeople are common currency in US media outlets and research suggests that these stereotypes are uniformly negative. However, there is no reason to expect that stereotypes will be consistent across cultures. The present paper provides the first empirical examination of salesperson stereotypes in an Asian country, specifically Taiwan. Using accepted psychological methods, Taiwanese salesperson stereotypes are found to be twofold, with a negative stereotype being quite congruent with existing US stereotypes, but also a positive stereotype, which may be related to the specific culture of Taiwan.
Abstract:
In the UK, Singapore, Canada, New Zealand and Australia, as in many other jurisdictions, charity law is rooted in the common law and anchored on the Statute of Charitable Uses 1601. The Pemsel classification of charitable purposes was uniformly accepted, and together with a shared and growing pool of judicial precedents, aided by the ‘spirit and intendment’ rule, has subsequently allowed the law to develop along much the same lines. In recent years, all the above jurisdictions have embarked on law reform processes designed to strengthen regulatory processes and to statutorily define and encode common law concepts. The reform outcomes are now to be found in a batch of national charity statutes which reflect interesting differences in the extent to which their respective governments have been prepared to balance the modernising of charitable purposes and other common law concepts alongside the customary concern to tighten the regulatory framework.