825 results for tree structured business data
Abstract:
The business angel market is usually identified as a local market, and the proximity of an investment has been shown to be key in the angel's investment preferences and an important filter at the screening stage of the investment decision. This is generally explained by the personal and localized networks used to identify potential investments, the hands-on involvement of the investor and the desire to minimize risk. However, a significant minority of investments are long distance. This paper is based on data from 373 investments made by 109 UK business angels. We classify the location of investments into three groups: local investments (those made within the same county or in adjacent counties); intermediate investments (those made in counties adjacent to the 'local' counties); and long-distance investments (those made beyond this range). Using ordered logit analysis, the paper develops and tests a number of hypotheses that relate long-distance investment to investment characteristics and investor characteristics. The paper concludes by drawing out the implications for entrepreneurs seeking business angel finance in investment-deficient regions, business angel networks seeking to match investors to entrepreneurs and firms (which are normally their primary clients), and for policy-makers responsible for local and regional economic development.
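As a rough illustration of the modelling step described above, an ordered logit over a three-level distance outcome can be set up with statsmodels' OrderedModel. This is a minimal sketch, not the paper's specification: the file name and the covariate columns (deal_size, syndicated, investor_experience) are hypothetical stand-ins for the investment and investor characteristics the paper actually tests.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

deals = pd.read_csv("angel_investments.csv")  # hypothetical dataset

# Ordered outcome: local < intermediate < long-distance.
dist = deals["distance_class"].astype(
    pd.CategoricalDtype(["local", "intermediate", "long"], ordered=True))

# Hypothetical covariates standing in for the paper's deal and
# investor characteristics.
X = deals[["deal_size", "syndicated", "investor_experience"]]

res = OrderedModel(dist, X, distr="logit").fit(method="bfgs")
print(res.summary())
```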
Abstract:
The development of methods providing reliable estimates of demographic parameters (e.g., survival rates, fecundity) for wild populations is essential to better understand the ecology and conservation requirements of individual species. A number of methods exist for estimating the demographics of stage-structured populations, but inherent mathematical complexity often limits their uptake by conservation practitioners. Estimating survival rates for pond-breeding amphibians is further complicated by their complex migratory and reproductive behaviours, often resulting in nonobservable states and successive cohorts of eggs and tadpoles. Here we used comprehensive data on 11 distinct breeding toad populations (Bufo calamita) to clarify and assess the suitability of a relatively simple method [the Kiritani-Nakasuji-Manly (KNM) method] to estimate the survival rates of stage-structured populations with overlapping life stages. The study shows that the KNM method is robust and provides realistic estimates of amphibian egg and larval survival rates for species in which breeding can occur as a single pulse or over a period of several weeks. The study also provides estimates of fecundity for seven distinct toad populations and indicates that it is essential to use reliable estimates of fecundity to limit the risk of under- or overestimating the survival rates when using the KNM method. Survival and fecundity rates for B. calamita populations were then used to define population matrices and make a limited exploration of their growth and viability. The findings of the study recently led to the implementation of practical conservation measures at the sites where populations were most vulnerable to extinction. © 2010 The Society of Population Ecology and Springer.
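For readers unfamiliar with the KNM method, the sketch below illustrates one common formulation of its area-ratio estimator: assuming a constant daily survival rate, survival from one stage to the next can be estimated by the ratio of the areas under successive "stage j and later" frequency curves. The sampling dates and counts are invented for illustration; this is not the paper's dataset or its full procedure.

```python
import numpy as np

days = np.array([0, 7, 14, 21, 28, 35])          # sampling dates
# Rows: stages (eggs, tadpoles, metamorphs); columns: counts per date.
counts = np.array([[500, 120,  10,   0,   0,  0],
                   [  0, 300, 340, 150,  30,  0],
                   [  0,   0,  20,  90,  60, 10]])

# Counts of individuals in stage j *or later* on each date.
cumulative = counts[::-1].cumsum(axis=0)[::-1]

# Area under each "stage j and later" frequency curve (trapezoid rule).
areas = np.trapz(cumulative, days, axis=1)

# Under the constant-daily-survival assumption, the ratio of successive
# areas estimates survival from one stage to the next.
stage_survival = areas[1:] / areas[:-1]
print(stage_survival)
```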
Abstract:
We propose a data flow based run time system as an efficient tool for supporting execution of parallel code on heterogeneous architectures hosting both multicore CPUs and GPUs. We discuss how the proposed run time system may be the target of both structured parallel applications developed using algorithmic skeletons/parallel design patterns and also more "domain specific" programming models. Experimental results demonstrating the feasibility of the approach are presented. © 2012 World Scientific Publishing Company.
Abstract:
Purpose: This paper investigates the link between two knowledge areas that have not previously been linked conceptually: stakeholder management and corporate culture. Focussing on the UK Construction Industry, the research study demonstrates the mutual dependency of each of these areas on the other and establishes a theoretical framework with real potential to impact positively upon industry.
Design/methodology/approach: The study utilises both qualitative and quantitative data collection and analysis to produce results contributing to the final framework. Semi-structured interviews were conducted and analysed through a cognitive mapping procedure. The results of this stage, set in the context of previous research, informed the development of a questionnaire that gathered quantitative values from a larger sample to enhance the final framework.
Findings: The data suggest that stakeholder management and corporate culture are key areas of an organisation's success, and that their importance will only grow in future. A clearly identifiable relationship was established between the two theoretical areas, and a framework was developed and quantified.
Originality/value: It is evident that change is needed within the UK Construction Industry. Companies must employ ethical and social stakeholder management and manage their corporate culture like any other aspect of their business. Doing so successfully will lead to more successful projects, a better reputation and survival. The findings of this project begin to show how such change may occur and how companies might intentionally deploy advantageous configurations of corporate culture and stakeholder management.
Abstract:
A rapidly increasing number of Web databases have now become accessible via their HTML form-based query interfaces. Query result pages are dynamically generated in response to user queries; they encode structured data and are displayed for human use. Query result pages usually contain other types of information in addition to query results, e.g., advertisements and navigation bars. The problem of extracting structured data from query result pages is critical for web data integration applications, such as comparison shopping and meta-search engines, and has been intensively studied, with a number of approaches proposed. As the structures of Web pages become more and more complex, the existing approaches start to fail, and most of them do not remove irrelevant content, which may affect the accuracy of data record extraction. We propose an automated approach for Web data extraction. First, it makes use of visual features and query terms to identify data sections and extracts data records in these sections. We also represent several content and visual features of visual blocks in a data section, and use them to filter out noisy blocks. Second, it measures the similarity between data items in different data records based on their visual and content features, and aligns them into groups so that the data items in the same group have the same semantics. The results of our experiments with a large set of Web query result pages in different domains show that our proposed approaches are highly effective.
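The alignment step can be pictured with a small sketch: each data item is reduced to a feature vector combining visual and content cues, and items from different records are grouped when their vectors are sufficiently similar. The feature set, the values and the threshold below are illustrative assumptions, not the approach's actual features.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Each item: [font_size, x_position, is_numeric, text_length] (all invented).
records = {
    "record1": {"title": np.array([14.0, 10, 0, 40]),
                "price": np.array([12.0, 200, 1, 6])},
    "record2": {"title": np.array([14.0, 10, 0, 35]),
                "price": np.array([12.0, 200, 1, 5])},
}

groups = []  # each group gathers items assumed to share the same semantics
for rec, items in records.items():
    for name, feat in items.items():
        for g in groups:
            if cosine(feat, g["centroid"]) > 0.98:
                g["members"].append((rec, name))
                break
        else:  # no similar group found: start a new semantic group
            groups.append({"centroid": feat, "members": [(rec, name)]})

for g in groups:
    print(g["members"])  # titles align with titles, prices with prices
```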
Abstract:
The highly structured nature of many digital signal processing operations allows these to be directly implemented as regular VLSI circuits. This feature has been successfully exploited in the design of a number of commercial chips, some examples of which are described. While many of the architectures on which such chips are based were originally derived on a heuristic basis, there is an increasing interest in the development of systematic design techniques for the direct mapping of computations onto regular VLSI arrays. The purpose of this paper is to show how the technique proposed by Kung can be readily extended to the design of VLSI signal processing chips where the organisation of computations at the level of individual data bits is of paramount importance. The technique in question allows architectures to be derived using the projection and retiming of data dependence graphs.
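The projection-and-scheduling idea can be illustrated with a toy space-time mapping: each node of a two-dimensional data dependence graph is assigned a processing element by a projection and a clock cycle by a linear schedule. The particular projection and schedule below are a textbook example (yielding a linear systolic array), not the derivation used in the paper.

```python
N = 4  # a 4x4 grid of index points (e.g., a small matrix-vector product)

def processor(i, j):
    return i          # projection along the j-axis: all (i, *) share one PE

def time_step(i, j):
    return i + j      # linear schedule: node (i, j) fires at cycle i + j

for i in range(N):
    for j in range(N):
        print(f"node ({i},{j}) -> PE {processor(i, j)}, "
              f"cycle {time_step(i, j)}")
```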
Abstract:
A bit-level systolic array system for performing a binary tree vector quantization (VQ) codebook search is described. This is based on a highly regular VLSI building block circuit. The system in question exhibits a very high data rate suitable for a range of real-time applications. A technique is described which reduces the storage requirements of such a system by 50%, with a corresponding decrease in hardware complexity.
Abstract:
A bit-level systolic array system for performing a binary tree Vector Quantization codebook search is described. This consists of a linear chain of regular VLSI building blocks and exhibits data rates suitable for a wide range of real-time applications. A technique is described which reduces the computation required at each node in the binary tree to that of a single inner product operation. This method applies to all the common distortion measures (including the Euclidean distance, the Weighted Euclidean distance and the Itakura-Saito distortion measure) and significantly reduces the hardware required to implement the tree search system. © 1990 Kluwer Academic Publishers.
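The single-inner-product reduction is easy to see for the Euclidean distance: choosing the nearer of two child centroids c_L and c_R reduces to testing x . (c_R - c_L) < (||c_R||^2 - ||c_L||^2) / 2, with the direction and threshold precomputed per node. The sketch below demonstrates this arithmetic on an invented depth-2 tree; it is an illustration of the decision rule, not the bit-level systolic implementation.

```python
import numpy as np

class Node:
    """Internal tree node; children are Nodes or leaf codeword indices."""
    def __init__(self, c_left, c_right, left, right):
        self.d = c_right - c_left                             # precomputed
        self.t = (c_right @ c_right - c_left @ c_left) / 2.0  # precomputed
        self.left, self.right = left, right

def tree_search(node, x):
    while isinstance(node, Node):
        # One inner product per node decides the branch.
        node = node.left if (x @ node.d) < node.t else node.right
    return node  # leaf: index of the selected codeword

# Depth-2 tree over four codewords (illustrative values).
c = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
root = Node(c[:2].mean(0), c[2:].mean(0),
            Node(c[0], c[1], 0, 1),
            Node(c[2], c[3], 2, 3))
print(tree_search(root, np.array([0.9, 0.2])))  # -> 2 (nearest is [1, 0])
```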
Abstract:
In this paper we seek to show how marketing activities inscribe value on business model innovation, representative of an act, or sequence of socially interconnecting acts. Theoretically, we ask two interlinked questions: (1) how can value inscriptions contribute to business model innovations? (2) how can marketing activities support the inscription of value on business model innovations? Semi-structured in-depth interviews were conducted with thirty-seven members across four industrial projects commercializing disruptive digital innovations. Various individuals from a diverse range of firms are shown to cast relevant components of their agency and knowledge on business model innovations through negotiation as an ongoing social process. Value inscription is mutually constituted from the marketing activities, interactions and negotiations of multiple project members across firms and functions to counter destabilizing forces and tensions arising from the commercialization of disruptive digital innovations. This contributes to recent conceptual thinking in the industrial marketing literature, which views business models as situated within dynamic business networks and a context-led evolutionary process. A contribution is also made to debate in the marketing literature around marketing's boundary-spanning role, with marketing activities shown to span and navigate across functions and firms in supporting value inscriptions on business model innovations.
Abstract:
Goal: This study assessed the degree to which services in south-central Ontario, Canada, were coordinated to meet the supportive care needs of palliative cancer patients and their families. Participants and method: Programs within the region that were identified as providing supportive care to palliative cancer patients and their families were eligible to participate in the study. Program administrators participated in a semi-structured interview and direct-care providers completed a survey instrument. Main results: Administrators from 37 (97%) of 38 eligible programs and 109 direct-care providers representing 26 (70%) programs participated in the study. Most administrator and direct-care respondents felt that existing services in the community were responsive to palliative care patients' individual needs. However, at a system level, most respondents in both groups felt that required services were not available and that resources were inadequate. The most frequently reported unmet supportive care need identified by both respondent groups was psychological/social support. Most administrator (69%) and direct-care (64%) respondents felt that palliative care services were not available when needed. The majority of administrator and direct-care respondents were satisfied with the exchange of patient information within and between programs, although direct-care staff identified a deficit in information transferred on palliative care patients' social/psychological status. Conclusions: The study demonstrated the value of a theory-based approach to evaluate the coordination of palliative cancer care services. The findings revealed that service programs faced significant challenges in their efforts to provide coordinated care. © 2009 Springer-Verlag.
Abstract:
Data flow techniques have been around since the early '70s, when they were used in compilers for sequential languages. Shortly after their introduction they were also considered as a possible model for parallel computing, although the impact here was limited. Recently, however, data flow has been identified as a candidate for the efficient implementation of various programming models on multi-core architectures. In most cases, however, the burden of determining data flow "macro" instructions is left to the programmer, while the compiler/run time system manages only the efficient scheduling of these instructions. We discuss a structured parallel programming approach supporting automatic compilation of programs to macro data flow and we show experimental results demonstrating the feasibility of the approach and the efficiency of the resulting "object" code on different classes of state-of-the-art multi-core architectures. The experimental results use different base mechanisms to implement the macro data flow run time support, from plain pthreads with condition variables to more modern and effective lock- and fence-free parallel frameworks. Experimental results comparing the efficiency of the proposed approach with those achieved using other, more classical, parallel frameworks are also presented. © 2012 IEEE.
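The execution model can be sketched in a few lines: a macro data flow "instruction" fires as soon as all of its input tokens are available. The toy interpreter below illustrates only this firing rule on an invented four-node graph; it is a sketch of the model, not the run time system described in the paper.

```python
from concurrent.futures import ThreadPoolExecutor

# Graph: instruction name -> (function, names of input instructions).
graph = {
    "a":   (lambda: 2,            []),
    "b":   (lambda: 3,            []),
    "mul": (lambda a, b: a * b,   ["a", "b"]),
    "inc": (lambda m: m + 1,      ["mul"]),
}

def run(graph):
    futures = {}
    with ThreadPoolExecutor() as pool:
        def fire(name):
            fn, deps = graph[name]
            args = [futures[d].result() for d in deps]  # wait for tokens
            return fn(*args)
        for name in graph:  # submit in dependency order (dicts keep order)
            futures[name] = pool.submit(fire, name)
    return {n: f.result() for n, f in futures.items()}

print(run(graph))  # {'a': 2, 'b': 3, 'mul': 6, 'inc': 7}
```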
Abstract:
Community structure depends on both deterministic and stochastic processes. However, patterns of community dissimilarity (e.g. differences in species composition) are difficult to interpret in terms of the relative roles of these processes. Local communities can be more dissimilar than (divergence), less dissimilar than (convergence), or as dissimilar as a hypothetical control based on either null or neutral models. However, several mechanisms may result in the same pattern, or act concurrently to generate a pattern, and much recent research has focused on unravelling these mechanisms and their relative contributions. Using a simulation approach, we addressed the effect of a complex but realistic spatial structure in the distribution of the niche axis, and we analysed patterns of species co-occurrence and beta diversity as measured by dissimilarity indices (e.g. the Jaccard index), using expectations under either a null model or neutral dynamics (i.e., based on switching off the niche effect). The strength of niche processes, dispersal, and environmental noise interacted strongly, so that niche-driven dynamics may result in local communities that either diverge or converge depending on the combination of these factors. Thus, a fundamental result is that, in real systems, interacting processes of community assembly can be disentangled only by measuring traits such as niche breadth and dispersal. The ability to detect the signal of the niche was also dependent on the spatial resolution of the sampling strategy, which must account for the multiple-scale spatial patterns in the niche axis. Notably, some of the patterns we observed correspond to patterns of community dissimilarity previously observed in the field, and suggest mechanistic explanations for them or indicate the data required to resolve them. Our framework offers a synthesis of the patterns of community dissimilarity produced by the interaction of deterministic and stochastic determinants of community assembly in a spatially explicit and complex context.
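The null-model comparison underlying such analyses can be sketched as follows: an observed mean Jaccard dissimilarity is compared against a distribution obtained by shuffling each species' occurrences across sites, which preserves species frequencies while destroying site structure. The community matrix and the particular null scheme below are illustrative assumptions, not the simulation used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Sites x species presence/absence matrix (invented for illustration).
comm = (rng.random((10, 20)) < 0.3).astype(int)

def mean_jaccard_dissimilarity(m):
    s = m.sum(1)                            # species richness per site
    shared = m @ m.T                        # pairwise shared-species counts
    union = s[:, None] + s[None, :] - shared
    d = 1 - shared / np.maximum(union, 1)   # pairwise Jaccard dissimilarity
    iu = np.triu_indices(len(m), k=1)
    return d[iu].mean()

obs = mean_jaccard_dissimilarity(comm)

# Null model: shuffle each species (column) across sites independently,
# preserving species frequencies but breaking site structure.
null = np.array([mean_jaccard_dissimilarity(rng.permuted(comm, axis=0))
                 for _ in range(999)])

# Small p suggests divergence (observed more dissimilar than null).
print(f"observed={obs:.3f}, null mean={null.mean():.3f}, "
      f"p(divergence)={(null >= obs).mean():.3f}")
```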
Abstract:
Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach, in which we make use of the common sources of evidence that humans use to understand data records on a displayed query result page. These include structural regularity, and visual and content similarity between data records displayed on a query result page. Based on these observations we propose new techniques that can identify each data record individually, while ignoring noise items, such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for use of vision-based algorithms in the context of data extraction from web sites.
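The visual intuition can be made concrete with a toy sketch: consecutive sibling blocks whose rendered features are mutually similar form a repeated data-record region, while dissimilar blocks such as adverts and navigation bars are discarded as noise. The block features and tolerance below are invented; this is only a caricature of the evidence that a system like rExtractor combines.

```python
import numpy as np

# Each sibling block: [height, width, link_count, text_length] (invented).
blocks = np.array([
    [ 60, 700,  1, 120],   # result 1
    [ 62, 700,  1, 115],   # result 2
    [ 58, 700,  1, 130],   # result 3
    [200, 300, 15,  20],   # sidebar of links -> noise
    [ 61, 700,  1, 110],   # result 4
])

def similar(a, b, tol=0.25):
    """True if every feature differs by at most `tol` in relative terms."""
    return bool(np.all(np.abs(a - b) / np.maximum(a, b) <= tol))

# Greedy pass: keep blocks similar to the first block, taken here as the
# record template; a real system would pick the dominant pattern instead.
template = blocks[0]
records = [i for i, blk in enumerate(blocks) if similar(template, blk)]
print("record blocks:", records)  # -> [0, 1, 2, 4]; block 3 rejected
```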