956 results for Log penalty
Abstract:
Search engine log files have been used to gather direct user feedback on the relevance of the documents presented in the results page. Typically, the relative position of clicks gathered from the log files is used as a proxy for direct user feedback. In this paper we identify reasons why the relative position of clicks is incomplete for deciphering user preferences. We therefore propose using the time a user spends reading a document as an indicator of preference for that document with respect to a query. We also identify the issues involved in using this time measure and propose ways to address them.
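A minimal sketch of how such a dwell-time signal could be extracted from a click log; the field names and log layout are assumptions, not the paper's:

```python
# Hypothetical sketch: approximate per-document dwell time from a click log,
# assuming each record is (query_id, doc_id, timestamp), sorted by time
# within a session. The gap before the next click approximates reading time.
from collections import defaultdict

def dwell_times(clicks):
    per_doc = defaultdict(list)
    for (q, doc, t), (_, _, t_next) in zip(clicks, clicks[1:]):
        per_doc[(q, doc)].append(t_next - t)
    # Note: the last click in a session gets no estimate -- one of the
    # issues with the time measure that the abstract alludes to.
    return {key: sum(v) / len(v) for key, v in per_doc.items()}

log = [("q1", "docA", 0.0), ("q1", "docB", 45.0), ("q1", "docC", 50.0)]
print(dwell_times(log))  # docA held attention ~45 s, docB only ~5 s
```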
Abstract:
The hydrolysis of cupric ion has been studied at various ionic strengths (0.01, 0.05, 0.1 and 0.5 M). The results are analyzed employing the 'core + links' theory, log-log plots, normalization plots, and an extrapolation method for obtaining the pure mononuclear curve. The stability constants of Cu₂(OH)₂²⁺, Cu₃(OH)₄²⁺, Cu(OH)⁺ and Cu(OH)₂ are reported.
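For reference, a sketch of the standard hydrolysis equilibria and stability constants behind these species, in the usual 'core + links' convention; the paper's own notation may differ:

```latex
% General hydrolysis equilibrium and the associated constant, defined via H+:
\[
  q\,\mathrm{Cu}^{2+} + p\,\mathrm{H_2O} \;\rightleftharpoons\;
  \mathrm{Cu}_q(\mathrm{OH})_p^{(2q-p)+} + p\,\mathrm{H}^+,
  \qquad
  \beta_{p,q}^{*} =
  \frac{[\mathrm{Cu}_q(\mathrm{OH})_p^{(2q-p)+}]\,[\mathrm{H}^+]^{p}}
       {[\mathrm{Cu}^{2+}]^{q}}.
\]
% The reported species correspond to (p,q) = (2,2), (4,3), (1,1) and (2,1).
```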
Abstract:
We present a measurement of the transverse momentum with respect to the jet axis (k_T) of particles in jets produced in pp̅ collisions at √s = 1.96 TeV. Results are obtained for charged particles in a cone of 0.5 radians around the jet axis in events with dijet invariant masses between 66 and 737 GeV/c². The experimental data are compared to theoretical predictions obtained for fragmentation partons within the framework of resummed perturbative QCD using the modified leading log and next-to-modified leading log approximations. The comparison shows that trends in the data are successfully described by the theoretical predictions, indicating that the perturbative QCD stage of jet fragmentation is dominant in shaping basic jet characteristics.
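The abstract does not spell out the observable; presumably it is the conventional transverse momentum relative to the jet axis:

```latex
% Conventional definition of k_T assumed here:
\[
  k_T \;=\; |\vec{p}\,|\sin\theta \;=\; |\vec{p}\times\hat{n}|,
\]
% where \vec{p} is the particle momentum, \hat{n} the unit vector along the
% jet axis, and \theta the angle between them.
```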
Abstract:
Different seismic hazard components pertaining to Bangalore city, namely soil overburden thickness, effective shear-wave velocity, factor of safety against liquefaction potential, peak ground acceleration at the seismic bedrock, site response in terms of amplification factor, and predominant frequency, have been individually evaluated. The overburden thickness distribution, predominantly in the range of 5-10 m in the city, has been estimated through a sub-surface model derived from geotechnical bore-log data. The effective shear-wave velocity distribution, established through Multi-channel Analysis of Surface Waves (MASW) surveys and subsequent data interpretation through dispersion analysis, exhibits site class D (180-360 m/s), site class C (360-760 m/s), and site class B (760-1500 m/s) in compliance with the National Earthquake Hazard Reduction Program (NEHRP) nomenclature. The peak ground acceleration has been estimated through a deterministic approach, based on a maximum credible earthquake of M_W = 5.1 assumed to nucleate from the closest active seismic source (the Mandya-Channapatna-Bangalore Lineament). The 1-D site response factor, computed at each borehole through geotechnical analysis across the study region, ranges from an amplification of about one to as high as four. Correspondingly, the predominant frequency estimated from the Fourier spectrum lies predominantly in the range of 3.5-5.0 Hz. The soil liquefaction hazard has been assessed in terms of the factor of safety against liquefaction potential, using standard penetration test data and the underlying soil properties, and indicates that 90% of the study region is non-liquefiable. The spatial distributions of the different hazard entities are placed on a GIS platform and subsequently integrated through the analytic hierarchy process. The resulting deterministic hazard map shows high hazard coverage in the western areas. The microzonation thus achieved is envisaged as a first-cut assessment of site-specific hazard, laying out a framework for higher-order seismic microzonation as well as a useful decision support tool in overall land-use planning and hazard management.
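For orientation, the factor of safety in such SPT-based liquefaction assessments usually takes the standard simplified-procedure (Seed-Idriss) form; this is a sketch, since the paper's exact formulation is not given here:

```latex
\[
  \mathrm{FS} \;=\; \frac{\mathrm{CRR}_{7.5}}{\mathrm{CSR}},
  \qquad
  \mathrm{CSR} \;=\; 0.65\,\frac{a_{\max}}{g}\,
  \frac{\sigma_{v}}{\sigma'_{v}}\,r_{d},
\]
% CRR_{7.5}: cyclic resistance ratio from SPT blow counts; a_max: peak ground
% acceleration; sigma_v, sigma'_v: total and effective overburden stresses;
% r_d: depth reduction factor. FS < 1 marks potentially liquefiable soil.
```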
Abstract:
An atmospheric radio noise burst represents the radiation received from one complete lightning flash at the frequency to which a receiver is tuned and within the receiver bandwidth. At tropical latitudes, the principal source of interference in the frequency range from 0.1 to 10 MHz is the burst form of atmospheric radio noise. The structure of a burst shows several approximately rectangular pulses of random amplitude, duration and frequency of recurrence. The influence of the noise on data communication can only be examined when the number of pulses crossing a certain amplitude threshold per unit time of the noise burst is known. A pulse rate counter designed for this purpose has been used at Bangalore (12°58′N, 77°35′E) to investigate the pulse characteristics of noise bursts at 3 MHz with a receiver bandwidth of 3.3 kHz at 6 dB. The results show that the number of pulses lying in the amplitude range between the peak and quasi-peak values of the noise bursts, and the burst duration corresponding to these pulses, follow log-normal distributions. The pulse rates deduced therefrom show a correlation between the number of pulses and the duration of the noise burst. The results are discussed with a view to furnishing the information necessary for data communication.
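A brief sketch, with synthetic stand-in data, of how the reported log-normal behaviour of pulse counts could be checked:

```python
# Illustrative sketch: fit a log-normal to pulse counts per burst and test
# the fit. The data here are synthetic stand-ins, not the Bangalore records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pulse_counts = rng.lognormal(mean=2.0, sigma=0.6, size=500)  # stand-in data

shape, loc, scale = stats.lognorm.fit(pulse_counts, floc=0)
ks = stats.kstest(pulse_counts, "lognorm", args=(shape, loc, scale))
print(f"sigma={shape:.2f}, median={scale:.1f}, KS p-value={ks.pvalue:.3f}")
```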
Abstract:
Magnetic susceptibility measurements were performed on freshly fallen Almahata Sitta meteorites. Most recovered samples are polymict ureilites. Those found in the first four months after the impact, before the meteorites were exposed to rain, have a magnetic susceptibility in the narrow range of 4.92 ± 0.08 (log χ in 10⁻⁹ Am²/kg), close to the range of other ureilite falls, 4.95 ± 0.14, reported by Rochette et al. (2009). The Almahata Sitta samples collected one year after the fall have similar values (4.90 ± 0.06), revealing that the effect of one year of terrestrial weathering was not yet severe. However, our reported values are higher than those derived from polymict (brecciated) ureilites, 4.38 ± 0.47 (Rochette et al. 2009), a set containing both falls and finds, confirming that the finds are significantly weathered. Additionally, other fresh-looking meteorites of non-ureilitic composition were collected in the Almahata Sitta strewn field. Magnetic susceptibility measurement proved to be a convenient non-destructive method for identifying non-ureilitic meteorites among those collected in the strewn field, even fully crusted ones. Three such meteorites, nos. 16, 25, and 41, were analyzed and their compositions determined as EH6, H5 and EL6, respectively (Zolensky et al., 2010). A high scatter of magnetic susceptibility values among small (< 5 g) samples revealed high inhomogeneity within the 2008 TC3 material at scales below 1-2 cm.
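A hypothetical screening sketch based on the fresh-fall range quoted above; the sample values below are illustrative, not measurements from the paper:

```python
# Hypothetical screening: flag candidate non-ureilites by log magnetic
# susceptibility (log chi, chi in 10^-9 Am^2/kg), using the fresh-fall
# ureilite range quoted above widened to k standard deviations.
UREILITE_MEAN, UREILITE_SD = 4.92, 0.08

def ureilite_like(log_chi, k=3.0):
    """True if log_chi lies within k standard deviations of the fresh-fall mean."""
    return abs(log_chi - UREILITE_MEAN) <= k * UREILITE_SD

for name, log_chi in {"sample A": 4.95, "sample B": 5.50}.items():  # illustrative
    verdict = "ureilite-like" if ureilite_like(log_chi) else "candidate non-ureilite"
    print(name, verdict)
```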
Abstract:
Two optimal non-linear reinforcement schemes, the Reward-Inaction and the Penalty-Inaction, for a two-state automaton functioning in a stationary random environment are considered. Very simple symmetry conditions on the non-linear function appearing in the reinforcement scheme are shown to be necessary and sufficient for optimality. General expressions for the variance and the rate of learning are derived. These schemes are compared with existing optimal linear schemes in terms of average variance and average rate of learning.
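For context, a minimal sketch of the linear Reward-Inaction (L_R-I) baseline that such nonlinear schemes are compared against; the nonlinear updating function itself is not reproduced in the abstract:

```python
# Two-action learning automaton with a linear Reward-Inaction update:
# on reward, probability mass moves toward the chosen action; on penalty,
# the probabilities are left unchanged ("inaction").
import random

def l_ri_step(p1, action, rewarded, a=0.05):
    """p1: probability of choosing action 1; returns the updated p1."""
    if not rewarded:
        return p1                   # inaction on penalty
    if action == 1:
        return p1 + a * (1.0 - p1)  # reinforce action 1
    return p1 * (1.0 - a)           # reinforce action 2

p1 = 0.5
env = [0.8, 0.4]  # stationary reward probabilities of the two actions
for _ in range(5000):
    action = 1 if random.random() < p1 else 2
    rewarded = random.random() < env[action - 1]
    p1 = l_ri_step(p1, action, rewarded)
print(f"p(action 1) = {p1:.3f}")  # tends toward the better action
```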
Abstract:
Introduction: This case study is based on the experiences with the Electronic Journal of Information Technology in Construction (ITcon), founded in 1995.
Development: This journal is an example of a particular category of open access journals, which use neither author charges nor subscriptions to finance their operations, but rely largely on unpaid voluntary work in the spirit of the open source movement. The journal has, after some initial struggle, survived its first decade and is now established as one of half a dozen peer-reviewed journals in its field.
Operations: The journal publishes articles as they become ready and creates virtual issues through alerting messages to "subscribers". It has also started to publish special issues, since this helps in attracting submissions and in sharing the workload of review management. From the start the journal adopted a rather traditional layout for its articles. After the first few years the HTML version was dropped, and papers are now published only in PDF format.
Performance: The journal has recently been benchmarked against the competing journals in its field. Its acceptance rate of 53% is slightly higher, and its average turnaround time of seven months is almost a year faster, compared to those journals in the sample for which data could be obtained. The server log files for the past three years have also been studied.
Conclusions: Our overall experience demonstrates that it is possible to publish this type of OA journal, with a yearly publishing volume equal to that of a quarterly journal and involving the processing of some fifty submissions a year, using a networked volunteer-based organization.
Abstract:
This article reports on a cross-sectional case study of a large construction project in which electronic document management (EDM) was used. Attitudes towards EDM from the perspective of individual end users were investigated. Responses from a survey were combined with data from system usage log files to obtain an overview of the attitudes prevalent in different user segments of the total population of 334 users. The survey was followed by semi-structured interviews with representative users. A strong majority of users from all segments of the project group considered EDM a valuable aid in their work processes, despite certain functional limitations of the system used and the complexity of the information mass. Based on the study, a model describing the key factors affecting end-user EDM adoption is proposed. The model draws on insights from earlier studies of EDM-enabled projects, on theoretical frameworks on technology acceptance and the success of information systems, and on the insights gained from the case study.
Abstract:
Triggered by the very rapid proliferation of Internet connectivity, electronic document management (EDM) systems are now being quickly adopted for managing the documentation produced and exchanged in construction projects. Nevertheless, there are still substantial barriers to the efficient use of such systems, mainly of a psychological nature and related to insufficient training. This paper presents the results of empirical studies carried out during 2002 concerning the current usage of EDM systems in the Finnish construction industry. The studies employed three different methods in order to provide a multifaceted view of the problem area, at both the industry and the individual project level. In order to provide an accurate measurement of overall usage volume in the industry as a whole, telephone interviews were conducted with key personnel from 100 randomly chosen construction projects. The interviews showed that while around one third of big projects have already adopted EDM, very few small projects have adopted this technology. The barriers to introduction were investigated through interviews with representatives of half a dozen providers of systems and ASP services. These interviews shed considerable light on the dynamics of the market for this type of service and illustrated the diversity of business strategies adopted by vendors. In the final study, log files from a project which had used an EDM system were analysed in order to determine usage patterns. The results showed that usage is still incomplete in coverage and that only some of the individuals involved in the project used the system efficiently, either as information producers or as consumers. The study also provided feedback on the usefulness of the log files.
Abstract:
We develop an alternative characterization of the statistical distribution of the inter-cell interference power observed in the uplink of CDMA systems. We show that the lognormal distribution matches the cumulative distribution and complementary cumulative distribution functions of the uplink interference better than the conventionally assumed Gaussian distribution and variants based on it. This is in spite of the fact that many users together contribute to the uplink interference, with both the number of users and their locations being random. Our observations hold even in the presence of power control and cell selection, which have hitherto been used to justify the Gaussian approximation. The parameters of the lognormal are obtained by matching moments, for which detailed analytical expressions incorporating wireless propagation, cellular layout, power control, and cell selection parameters are developed. The moment-matched lognormal model, while not perfect, is an order of magnitude better at modeling the interference power distribution.
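The moment-matching step itself has a closed form: given the mean and variance of the interference power, the lognormal parameters follow directly. A sketch with the moments taken as inputs, rather than derived from the propagation and layout model as in the paper:

```python
import numpy as np

def lognormal_from_moments(mean, var):
    """Match a lognormal to a given mean and variance.

    If I ~ Lognormal(mu, sigma^2), then E[I] = exp(mu + sigma^2/2) and
    Var[I] = (exp(sigma^2) - 1) * exp(2*mu + sigma^2); inverting gives:
    """
    sigma2 = np.log(1.0 + var / mean**2)
    mu = np.log(mean) - 0.5 * sigma2
    return mu, np.sqrt(sigma2)

mu, sigma = lognormal_from_moments(mean=1.0, var=0.5)  # illustrative moments
print(f"mu={mu:.3f}, sigma={sigma:.3f}")
```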
Abstract:
The overall performance of random early detection (RED) routers in the Internet is determined by the settings of their associated parameters. The lack of a functional relationship between RED performance and its parameters makes it difficult to apply optimization techniques directly in order to optimize the RED parameters. In this paper, we formulate a generic optimization framework using a stochastically bounded delay metric to dynamically adapt the RED parameters. The constrained optimization problem thus formulated is solved using traditional nonlinear programming techniques; here, we implement the barrier and penalty function approaches. We adopt a second-order nonlinear optimization framework and propose a novel four-timescale stochastic approximation algorithm to estimate the gradient and Hessian of the barrier and penalty objectives and to update the RED parameters. A convergence analysis of the proposed algorithm is briefly sketched. We perform simulations to evaluate the performance of our algorithm with both barrier and penalty objectives and compare it with RED and a variant of it from the literature. We observe an improvement in performance using our proposed algorithm over both RED and this variant.
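As a schematic of the penalty-function idea, here is the textbook quadratic-penalty construction, not the paper's four-timescale stochastic approximation algorithm:

```python
# Quadratic-penalty transformation for a constrained problem
#   min f(x)  subject to  g(x) <= 0,
# turned into the unconstrained objective  f(x) + r * max(0, g(x))^2.
def penalty_objective(f, g, r):
    return lambda x: f(x) + r * max(0.0, g(x)) ** 2

# Illustrative stand-ins: f is a performance metric, g encodes a delay bound.
f = lambda x: (x - 2.0) ** 2      # hypothetical performance objective
g = lambda x: 1.0 - x             # constraint x >= 1, written as g(x) <= 0
obj = penalty_objective(f, g, r=100.0)
print(obj(0.5), obj(2.0))         # violating the bound is heavily penalized
```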
Abstract:
We characterize the optimal reserves, and the resulting probability of a bank run, as a function of the penalty imposed by the central bank, the probability of depositors' liquidity needs, and the return on outside investment opportunities.
Abstract:
The objective of this paper is to investigate and model the characteristics of the prevailing volatility smiles and surfaces in the DAX and ESX index options markets. Continuing the line of research on implied volatility functions, the Standardized Log-Moneyness model is introduced and fitted to historical data. The model replaces the constant volatility parameter of the Black-Scholes pricing model with a matrix of volatilities with respect to moneyness and maturity, and is tested out-of-sample. Regarding the dynamics, the results support the hypotheses put forward in this study: the smile increases in magnitude as maturity and ATM volatility decrease, changes in the underlying asset are negatively correlated with implied ATM volatility, and changes in time to maturity are positively correlated with it. Further, the Standardized Log-Moneyness model shows improved pricing accuracy compared to previous implied volatility function models, although the parameters of the models need to be re-estimated continuously for the models to fully capture the changing dynamics of the volatility smiles.
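A sketch of the standardized log-moneyness coordinate by which such a model would typically index its volatility matrix; the paper's exact definition may differ:

```python
import math

def standardized_log_moneyness(strike, forward, atm_vol, tau):
    """ln(K/F) scaled by the ATM volatility over the remaining maturity."""
    return math.log(strike / forward) / (atm_vol * math.sqrt(tau))

# Illustrative: a 5% OTM call, 20% ATM vol, 3 months to expiry.
m = standardized_log_moneyness(105.0, 100.0, 0.20, 0.25)
print(f"standardized log-moneyness = {m:.3f}")
```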