Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes will be needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
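The random sequential box-covering procedure mentioned above can be sketched in a few lines (a minimal pure-Python illustration of the usual formulation, in which a random uncovered node seeds each box; not the thesis's actual implementation):

```python
import random
from collections import deque

def box_count(adj, radius, seed=0):
    """Random sequential box covering: repeatedly pick an uncovered
    node at random and place every uncovered node within `radius`
    hops of it into a new box; return the number of boxes used."""
    rng = random.Random(seed)
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        center = rng.choice(sorted(uncovered))
        # Breadth-first search out to `radius` hops from the center.
        dist = {center: 0}
        queue = deque([center])
        while queue:
            u = queue.popleft()
            if dist[u] == radius:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        uncovered -= set(dist)   # every node reached is now covered
        boxes += 1
    return boxes
```

Repeating the count over a range of radii and regressing the logarithm of the box count against the logarithm of the radius yields the fractal-dimension estimate; averaging over several random seeds reduces the stochastic bias of the covering.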
By using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary across different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
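The generalized fractal dimensions D(q) that such a box-covering algorithm targets can be illustrated on a measure for which they are known exactly, the deterministic binomial cascade (an illustrative sketch only; the thesis computes these quantities for networks, not interval measures):

```python
import math

def cascade_measures(m, level):
    """Box measures of a deterministic binomial cascade on [0, 1]
    at resolution 2**-level (multiplicative weights m and 1 - m)."""
    probs = [1.0]
    for _ in range(level):
        probs = [p * w for p in probs for w in (m, 1.0 - m)]
    return probs

def generalized_dimension(probs, eps, q):
    """D(q) = log(sum_i p_i**q) / ((q - 1) * log eps), for q != 1."""
    z = sum(p ** q for p in probs if p > 0)
    return math.log(z) / ((q - 1) * math.log(eps))
```

For m = 0.7 at level 10, D(0) = 1 (the support fills the interval) while D(2) is roughly 0.79, strictly below D(0); a non-constant D(q) spectrum is precisely the multifractality the thesis tests for.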
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and the weight of the edge between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have a smaller fractal dimension, hence smoother sample paths. We then construct networks via the horizontal visibility graph (HVG) technique, which has been widely used in recent years. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
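The HVG construction is simple enough to state in full: two time points see each other horizontally when every intermediate value lies strictly below both. A direct, quadratic-time sketch:

```python
def horizontal_visibility_graph(series):
    """Horizontal visibility graph (Luque et al. convention): nodes
    i < j are linked iff series[k] < min(series[i], series[j]) for
    every intermediate index i < k < j."""
    n = len(series)
    edges = set()
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```

Consecutive points are always linked (the intermediate range is empty), so the HVG of an n-point series is connected with at least n - 1 edges; applying a box-covering algorithm to the resulting graph then yields its generalized fractal dimensions.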
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; for HVG networks of fractional Brownian motions, however, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
Abstract:
Many modern business environments employ software to automate the delivery of workflows; yet workflow design and generation remain a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some rely on process data mining, whereas others derive workflow models from operational structures, domain-specific knowledge, or workflow model compositions from knowledge bases. Many approaches draw on principles from automatic planning, but are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we prove the associativity of a composition operator that permits crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to prove tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
Abstract:
The decision of the District Court of Queensland in Mark Treherne & Associates -v- Murray David Hopkins [2010] QDC 36 will have particular relevance for early career lawyers. This decision raises questions about the limits of the jurisdiction of judicial registrars in the Magistrates Court.
Abstract:
Objective: To comprehensively measure the burden of hepatitis B, liver cirrhosis and liver cancer in Shandong province, using disability-adjusted life years (DALYs) to estimate the disease burden attributable to hepatitis B virus (HBV) infection. Methods: Based on the mortality data for hepatitis B, liver cirrhosis and liver cancer derived from the third National Sampling Retrospective Survey for Causes of Death during 2004 and 2005, the incidence data for hepatitis B, and the prevalence and disability weights of liver cancer obtained from the Shandong Cancer Prevalence Sampling Survey in 2007, we calculated the years of life lost (YLLs), years lived with disability (YLDs) and DALYs for the three diseases, following the procedures developed for the global burden of disease (GBD) study to ensure comparability. Results: The total burdens of hepatitis B, liver cirrhosis and liver cancer in Shandong province in 2005 were 211 616 DALYs (39 377 YLLs and 172 239 YLDs), 16 783 DALYs (13 497 YLLs and 3286 YLDs) and 247 795 DALYs (240 236 YLLs and 7559 YLDs) respectively, and the burden for men was 2.19, 2.36 and 3.16 times that for women, respectively. The burden of hepatitis B was mainly due to disability (81.39%), whereas most of the burden of liver cirrhosis and liver cancer was due to premature death (80.42% and 96.95%, respectively). The per-patient burdens of hepatitis B, liver cirrhosis and liver cancer were 4.8, 13.73 and 11.11 DALYs, respectively. Conclusion: Hepatitis B, liver cirrhosis and liver cancer cause a considerable burden to the people living in Shandong province, indicating that control of hepatitis B virus infection would bring huge potential benefits.
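The reported totals decompose as DALY = YLL + YLD, and the abstract's figures are internally consistent, as a quick check shows:

```python
def daly(yll, yld):
    """Disability-adjusted life years: years of life lost to premature
    death (YLL) plus years lived with disability (YLD)."""
    return yll + yld

# Figures reported for Shandong province, 2005 (YLL, YLD, total DALYs).
reported = {
    "hepatitis B":     (39_377, 172_239, 211_616),
    "liver cirrhosis": (13_497,   3_286,  16_783),
    "liver cancer":    (240_236,  7_559, 247_795),
}
for disease, (yll, yld, total) in reported.items():
    assert daly(yll, yld) == total
    # Share of burden due to disability, e.g. 81.39% for hepatitis B.
    print(f"{disease}: {100 * yld / total:.2f}% from disability")
```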
Abstract:
The Design-Build (DB) system has been widely adopted overseas, but it has not yet achieved the same popularity in the People’s Republic of China. The selection of a design-build variant is regarded as one of the critical obstacles to the application of this alternative. This paper investigates the categories of design-build variants in the Chinese construction market. Develop-and-construction, enhanced design-build, traditional design-build and engineering-procurement-construction (EPC) are the four design-build variants currently adopted by clients. Each of them has been developed to meet a varying set of circumstances and has its own advantages and disadvantages. Develop-and-construction is mostly used in large, complex projects in the housing industry; it guarantees the client great control over the project while still leaving some design room for the contractor. The traditional design-build and enhanced design-build systems are mostly applied in comparatively simple, small-scale projects, where the DB contractor has greater control of the project. EPC is an extension of the pure design-build method and is widely adopted in the petrochemical, metallurgical and electronic fields because of the high technical requirements and the necessity for one entity to control the design, construction, procurement, commissioning, etc. Four corresponding design-build projects are also presented in this paper to better illustrate the operational process and provide insight for understanding design-build variants in Mainland China.
Abstract:
As the international community struggles to find a cost-effective solution to mitigate climate change and reduce greenhouse gas emissions, carbon capture and storage (CCS) has emerged as a project mechanism with the potential to assist in transitioning society towards its low carbon future. Being a politically attractive option, legal regimes to promote and approve CCS have proceeded at an accelerated pace in multiple jurisdictions including the European Union and Australia. This acceleration and emphasis on the swift commercial deployment of CCS projects has left the legal community in the undesirable position of having to advise on the strengths and weaknesses of the key features of these regimes once they have been passed and become operational. This is an area where environmental law principles are tested to their very limit. On the one hand, implementation of this new technology should proceed in a precautionary manner to avoid adverse impacts on the atmosphere, local community and broader environment. On the other hand, excessive regulatory restrictions will stifle innovation and act as a barrier to the swift deployment of CCS projects around the world. Finding the balance between precaution and innovation is no easy feat. This is an area where lawyers, academics, regulators and industry representatives can benefit from the sharing of collective experiences, both positive and negative, across the jurisdictions. This exemplary book appears to have been collated with this philosophy in mind and provides an insightful addition to the global dialogue on establishing effective national and international regimes for the implementation of CCS projects...
Abstract:
Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when shoes are worn. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive enough to describe the kinematics of the foot–shoe complex and lower leg during walking gait. To achieve this, a new marker set was established, consisting of 25 markers applied to the shoe and skin surface, which informed a four-segment kinematic model of the foot–shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was good to excellent (ICC = 0.75–0.98), indicating that markers could be applied reliably by different raters. Intra-rater reliability was better for the experienced rater (ICC = 0.68–0.99) than for the inexperienced rater (ICC = 0.38–0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint, MDD90 = 2.17–9.36°; tarsometatarsal joint, MDD90 = 1.03–9.29°; and metatarsophalangeal joint, MDD90 = 1.75–9.12°. The proposed thresholds are specific to the description of shod motion and can be used in future research aimed at comparing different footwear.
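MDD thresholds of this kind are typically derived, in the reliability literature, from the standard error of measurement. A sketch of that common formulation follows (the authors' exact computation is not specified in the abstract, and the numbers below are hypothetical):

```python
import math

def mdd90(sd, icc, z90=1.645):
    """Minimal detectable difference at 90% confidence, a common
    reliability-literature formulation (not necessarily the authors'
    exact procedure): MDD90 = z * sqrt(2) * SEM,
    where SEM = SD * sqrt(1 - ICC)."""
    sem = sd * math.sqrt(1.0 - icc)
    return z90 * math.sqrt(2.0) * sem

# Hypothetical example: a joint-angle SD of 3 degrees at ICC = 0.90
# gives an MDD90 of roughly 2.2 degrees; a higher ICC shrinks the MDD.
```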
Abstract:
Positive and negative small ions, aerosol ion and number concentration and dc electric fields were monitored at an overhead high-voltage power line site. We show that the emission of corona ions was not spatially uniform along the lines and occurred from discrete components such as a particular set of spacers. Maximum ion concentrations and atmospheric dc electric fields were observed at a point 20 m downwind of the lines. It was estimated that less than 7% of the total number of aerosol particles was charged. The electrical parameters decreased steadily with further downwind distance but remained significantly higher than background.
Abstract:
Consider the concept combination ‘pet human’. In word association experiments, human subjects produce the associate ‘slave’ in relation to this combination. The striking aspect of this associate is that it is not produced as an associate of ‘pet’ or ‘human’ in isolation. In other words, the associate ‘slave’ seems to be emergent. Such emergent associations sometimes have a creative character, and cognitive science is largely silent about how we produce them. Departing from a dimensional model of human conceptual space, this article explores concept combinations and argues that emergent associations are a result of abductive reasoning within conceptual space, that is, below the symbolic level of cognition. A tensor-based approach is used to model concept combinations, allowing such combinations to be formalized as interacting quantum systems. Free association norm data are used to motivate the underlying basis of the conceptual space. It is shown by analogy how some concept combinations may behave like quantum-entangled (non-separable) particles. Two methods of analysis are presented for empirically validating the presence of non-separable concept combinations in human cognition. One method is based on quantum theory, and the other on comparing a joint (true theoretic) probability distribution with a distribution based on a separability assumption, using a chi-square goodness-of-fit test. Although these methods were inconclusive in relation to an empirical study of bi-ambiguous concept combinations, avenues for further refinement of these methods are identified.
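The second method can be sketched concretely: under separability, the joint distribution of associates factorizes into the product of its marginals, and a chi-square statistic measures the departure from that prediction (the counts below are illustrative, not the study's data):

```python
def chi_square_separability(joint):
    """Chi-square goodness-of-fit statistic of an observed joint count
    table against the separable (product-of-marginals) model; returns
    the statistic and its degrees of freedom."""
    n = sum(sum(row) for row in joint)
    row_tot = [sum(row) for row in joint]
    col_tot = [sum(col) for col in zip(*joint)]
    stat = 0.0
    for i, row in enumerate(joint):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n  # separable prediction
            stat += (obs - expected) ** 2 / expected
    df = (len(row_tot) - 1) * (len(col_tot) - 1)
    return stat, df

# A perfectly separable table gives statistic 0; a strongly coupled
# ("entangled") table gives a large statistic relative to chi2(df).
```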
Abstract:
Several studies of the surface effect on the bending properties of nanowires (NWs) have been conducted. However, these analyses are mainly based on theoretical predictions, and there are few studies integrating theoretical predictions with simulation results. Thus, based on molecular dynamics (MD) simulation and different modified beam theories, a comprehensive theoretical and numerical study of the bending properties of nanowires, considering surface/intrinsic stress effects and the axial extension effect, is conducted in this work. The discussion begins with the Euler-Bernoulli beam theory and the Timoshenko beam theory augmented with the surface effect. It is found that when the NW possesses a relatively small cross-sectional size, these two theories cannot accurately capture the true surface effect. The incorporation of the axial extension effect into Euler-Bernoulli beam theory provides a nonlinear solution that agrees with the nonlinear-elastic experimental and MD results. However, it is still inaccurate when the NW cross-sectional size is relatively small. Such inaccuracy is also observed for the Euler-Bernoulli beam theory augmented with contributions from both the surface effect and the axial extension effect. A comprehensive model that fully considers the influences of surface stress, intrinsic stress and axial extension is then proposed, which leads to good agreement with MD simulation results. It is thus concluded that, for NWs with a relatively small cross-sectional size, a simple consideration of the surface stress effect is inappropriate, and a comprehensive consideration of the intrinsic stress effect is required.
Abstract:
Studies continue to report ancient DNA sequences and viable microbial cells that are many millions of years old. In this paper we evaluate some of the most extravagant claims of geologically ancient DNA. We conclude that although exciting, the reports suffer from inadequate experimental setup and insufficient authentication of results. Consequently, it remains doubtful whether amplifiable DNA sequences and viable bacteria can survive over geological timescales. To enhance the credibility of future studies and assist in discarding false-positive results, we propose a rigorous set of authentication criteria for work with geologically ancient DNA.
Abstract:
In natural waterways and estuaries, an understanding of turbulent mixing is critical to the knowledge of sediment transport, stormwater runoff during flood events, and the release of nutrient-rich wastewater into ecosystems. In the present study, field measurements were conducted in a small subtropical estuary with a micro-tidal range and semi-diurnal tides during king tide conditions, i.e., the largest tidal range of both 2009 and 2010. The turbulent velocity measurements were performed continuously at high frequency (50 Hz) for 60 h. Two acoustic Doppler velocimeters (ADVs) were sampled simultaneously in the middle estuarine zone, and a third ADV was deployed in the upper estuary for 12 h only. The results provided a unique characterisation of the turbulence in both the middle and upper estuarine zones under king tide conditions. The present observations showed some marked differences between king tide and neap tide conditions. During the king tide conditions, tidal forcing was the dominant water exchange and circulation mechanism in the estuary. In contrast, long-term oscillations linked with internal and external resonance played a major role in turbulent mixing during neap tides. The data set showed further that the upper estuarine zone was drastically less affected by the spring tide range: the flow motion remained slow, but the turbulent velocity data were affected by the propagation of a transient front during the very early flood tide motion at the sampling site. © 2012 Springer Science+Business Media B.V.
Abstract:
We analyze longitudinal data on innovative start-up projects and apply Lazear’s jack-of-all-trades theory to investigate the effect of nascent entrepreneurs’ balanced skills on their progress in the venture creation process. Our results suggest that those nascent entrepreneurs who exhibit a sufficiently broad set of skills undertake more gestation activities towards an operational new venture. This supports the notion that a balanced skill set is an important determinant of entrepreneurial market entry.
Abstract:
To many aspiring writer/directors of feature films, breaking into the industry may be perceived as an insurmountable obstacle. In contemplating my own attempt to venture into the world of feature filmmaking, I have reasoned that a formulated strategy could be of benefit. As the film industry is largely concerned with economics, I decided that writing a relatively low-cost feature film might improve my chances of being allowed directorship by a credible producer. As a result, I have decided to write a modest feature film set in a single interior shooting location in an attempt to minimise production costs, thereby also attempting to reduce the perceived risk in hiring the writer as debut director. As a practice-led researcher, my primary focus in this research is to create a screenplay in response to my greater directorial aspirations and to explore the way in which this strategic decision to write a single-location film impacts not only the craft of cinematic writing but also the creative process itself, as it pertains to the project at hand. The result is a comedy script titled Gravy, which is set in a single apartment and strives to maintain a fast comedic pace while employing a range of character and plot devices, in conjunction with creative decisions that help to sustain cinematic interest within the confines of the apartment. In addition to the screenplay artifact, the exegesis includes a section that reflects on the writing process in the form of personal accounts, decisions, problems and solutions, as well as an examination of other authors' works.
Abstract:
In this paper we extend the ideas of Brugnano, Iavernaro and Trigiante in their development of HBVM($s,r$) methods to construct symplectic Runge-Kutta methods for all values of $s$ and $r$ with $s\geq r$. However, these methods do not see the dramatic performance improvement that HBVMs can attain. Nevertheless, in the case of additive stochastic Hamiltonian problems an extension of these ideas, which requires the simulation of an independent Wiener process at each stage of a Runge-Kutta method, leads to methods that have very favourable properties. These ideas are illustrated by some simple numerical tests for the modified midpoint rule.
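For the additive-noise case, a minimal sketch on the harmonic oscillator dq = p dt, dp = -q dt + sigma dW illustrates the flavour of such methods: the implicit midpoint rule with a Wiener increment added at the stage (an illustrative reading of the idea; the paper's modified midpoint rule may differ in detail):

```python
import math
import random

def stochastic_midpoint_oscillator(q0, p0, h, steps, sigma, seed=0):
    """Implicit midpoint rule for the additive-noise oscillator
        dq = p dt,  dp = -q dt + sigma dW
    (an illustrative sketch, not the paper's exact scheme).
    The linear implicit stage is solved in closed form."""
    rng = random.Random(seed)
    q, p = q0, p0
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(h))
        # Solve (I - (h/2) A) y1 = (I + (h/2) A) y0 + (0, sigma*dW)
        # with A = [[0, 1], [-1, 0]].
        a = h / 2.0
        rq = q + a * p
        rp = p - a * q + sigma * dW
        det = 1.0 + a * a
        q, p = (rq + a * rp) / det, (rp - a * rq) / det
    return q, p
```

With sigma = 0 this reduces to the deterministic implicit midpoint rule, which conserves the oscillator energy q^2 + p^2 exactly; that exact preservation of the deterministic invariant is what makes midpoint-type schemes attractive for stochastic Hamiltonian problems.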