
We consider the problem of estimating an unknown density and its derivatives in a regression setting with random design. Instead of expanding the function on a regular wavelet basis, we expand it on a warped wavelet basis. We investigate the properties of this new basis and evaluate its asymptotic performance by determining an upper bound on the mean integrated squared error under different dependence structures. We prove that it attains a sharp rate of convergence for a wide class of unknown regression functions.

In nonparametric regression, it is often of interest to estimate some functionals of a regression function, such as its derivatives. For example, in the study of growth curves, the first (speed) and second (spurt) derivatives of the height as a function of age are important parameters for study (Muller [

where

Considerable research has been devoted to the subject of estimation, mainly the kernel methods, see, e.g., [

Recently, a quite different algorithm was developed by Kerkyacharian and Picard [

The consideration of a random design with warped wavelets significantly complicates the problem, and no wavelet estimators for the derivative of a regression function exist in this case. This motivates us to study the problem under different dependence structures: the strong mixing case and the ρ-mixing case. Asymptotic mean integrated squared error properties for derivatives of the regression function have been explored. In each case, we prove that the warped wavelet estimator attains a fast rate of convergence. Another important advantage of warped basis estimators is that they are near optimal in the minimax sense over a large class of function spaces for a wide variety of design densities, not necessarily bounded above and below as generally required by other wavelet estimators. Basically, the condition on the design refers to the Muckenhoupt weights theory introduced in Muckenhoupt [

The rest of the paper is organized as follows. Section 2 describes the warped wavelet basis and the nonequispaced procedure. Optimality of the estimators is presented in Section 3, while Section 4 contains the proofs of the main results.

We aim to estimate the derivative of the regression function when

Condition 1. We define the m-th strong mixing coefficient of
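For reference, the standard form of the $m$-th strong mixing ($\alpha$-mixing) coefficient of a stationary process $(Z_i)_{i \in \mathbb{Z}}$ is

```latex
\alpha(m) \;=\; \sup_{A \in \mathcal{F}_{-\infty}^{0},\; B \in \mathcal{F}_{m}^{\infty}}
\bigl| \mathbb{P}(A \cap B) - \mathbb{P}(A)\,\mathbb{P}(B) \bigr|,
\qquad
\mathcal{F}_{a}^{b} \;=\; \sigma\bigl( Z_i,\; a \le i \le b \bigr),
```

and the process is said to be strongly mixing when $\alpha(m) \to 0$ as $m \to \infty$.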

We define

Applications on strong mixing can be found in [

Condition 2. Let
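For reference, the standard ρ-mixing (maximal correlation) coefficient of a stationary process $(Z_i)_{i \in \mathbb{Z}}$ is

```latex
\rho(m) \;=\; \sup_{U \in L^{2}(\mathcal{F}_{-\infty}^{0}),\; V \in L^{2}(\mathcal{F}_{m}^{\infty})}
\bigl| \operatorname{Corr}(U, V) \bigr|,
\qquad
\mathcal{F}_{a}^{b} \;=\; \sigma\bigl( Z_i,\; a \le i \le b \bigr),
```

and the process is ρ-mixing when $\rho(m) \to 0$ as $m \to \infty$. Since $4\alpha(m) \le \rho(m)$, ρ-mixing implies strong mixing.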

Let N be a positive integer. We consider an orthonormal wavelet basis generated by dilations and translations of a father Daubechies-type wavelet and a mother Daubechies-type wavelet of the family db2N (see [

With appropriate treatment at the boundaries, there exists an integer

forms an orthonormal basis of

where
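As a concrete numerical illustration (not part of the paper's construction), the orthonormality of a Daubechies-type basis rests on algebraic conditions on the underlying low-pass filter. The sketch below checks them for db2, the smallest member of the db2N family, using the well-known closed-form filter coefficients:

```python
import math

# Closed-form low-pass filter of the Daubechies db2 wavelet
# (N = 2 vanishing moments), the smallest member of the db2N family.
s = math.sqrt(3.0)
h = [(1 + s) / (4 * math.sqrt(2.0)),
     (3 + s) / (4 * math.sqrt(2.0)),
     (3 - s) / (4 * math.sqrt(2.0)),
     (1 - s) / (4 * math.sqrt(2.0))]

# Conditions behind orthonormality of the dilated/translated basis:
energy = sum(c * c for c in h)       # unit energy: sum of h_k^2 equals 1
shift2 = h[0] * h[2] + h[1] * h[3]   # orthogonality to even shifts: equals 0
mass = sum(h)                        # admissibility: sum of h_k equals sqrt(2)

print(energy, shift2, mass)
```

Analogous (longer) filters give the higher members db4, db6, … of the family used in the paper.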

the Besov balls. We say

with the usual modifications if

where the coefficients are

and
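In terms of the scaling and wavelet coefficients above, a standard (equivalent) sequence-space formulation of the Besov ball $B^{s}_{p,q}(M)$ with $1 \le p, q \le \infty$ is, writing $j_0$ for the coarsest resolution level,

```latex
\Bigl( \sum_{k} |\alpha_{j_0 k}|^{p} \Bigr)^{1/p}
+ \Biggl( \sum_{j \ge j_0} 2^{\,j\left(s + \frac{1}{2} - \frac{1}{p}\right) q}
\Bigl( \sum_{k} |\beta_{jk}|^{p} \Bigr)^{q/p} \Biggr)^{1/q} \;\le\; M,
```

with the usual supremum modifications when $p = \infty$ or $q = \infty$.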

We define the linear wavelet estimator

where

where

is a known function, continuous and strictly monotone from

It is clear that the above estimator is unbiased, and we consider the following warped estimator:

In the case where g is unknown, we replace G wherever it appears in the construction by the empirical distribution function of the X_{i}'s:

Let us define the new empirical wavelet coefficients:

Consequently we have the estimator:

This approach was initially introduced by Rao [
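The warped procedure above can be sketched numerically. The following is a minimal illustration with two simplifications that are not in the paper: the Haar father wavelet replaces the db2N family (only because it is trivial to evaluate pointwise), and only the scaling part of the linear estimator at a single level $j_0$ is computed. The warping itself uses the empirical distribution of the $X_i$'s, as described:

```python
import random

def haar_phi(j, k, u):
    """Haar scaling function phi_{jk}(u) = 2^{j/2} on [k 2^{-j}, (k+1) 2^{-j})."""
    return 2.0 ** (j / 2.0) if k * 2.0 ** (-j) <= u < (k + 1) * 2.0 ** (-j) else 0.0

def warped_linear_estimator(xs, ys, j0):
    """Linear warped-wavelet estimate of u -> m(G^{-1}(u)) on [0, 1),
    warping the design through the empirical distribution of the X_i's."""
    n = len(xs)
    # Empirical distribution at the sample points, G_n(X_i) = rank_i / (n + 1)
    # (the n + 1 denominator keeps the warped points strictly inside (0, 1)).
    order = sorted(range(n), key=lambda i: xs[i])
    g_hat = [0.0] * n
    for rank, i in enumerate(order, start=1):
        g_hat[i] = rank / (n + 1)
    # Empirical coefficients: alpha_{j0,k} = (1/n) sum_i Y_i phi_{j0,k}(G_n(X_i)).
    coeffs = [sum(ys[i] * haar_phi(j0, k, g_hat[i]) for i in range(n)) / n
              for k in range(2 ** j0)]
    # Reconstruction in the warped domain.
    return lambda u: sum(c * haar_phi(j0, k, u) for k, c in enumerate(coeffs))

# Toy check: uniform design, m(x) = x plus small noise, so m(G^{-1}(u)) is close to u.
random.seed(0)
n = 4096
xs = [random.random() for _ in range(n)]
ys = [x + 0.01 * random.gauss(0.0, 1.0) for x in xs]
est = warped_linear_estimator(xs, ys, 3)
```

At resolution $j_0 = 3$ the estimate is piecewise constant on dyadic bins of the warped axis; at a bin centre such as $u = 0.5625$ it should lie close to $u$ itself for this design.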

The main results of the paper are upper bounds for the mean integrated squared error of the wavelet estimator

Moreover, C denotes any constant that does not depend on l, k and n.

Proposition 4.1. Suppose that

Proof of Proposition 4.1. We have

So

where

and

For upper bound of

Using the same technique as in [

Considering almost the same integral as in

It follows from (4.2), (4.3) and (4.1), that

Proposition 4.2. Suppose that the assumptions of Condition 1 hold. Let

Proof of Proposition 4.2. Observe that

where

and

It follows from the fact that

where

Using Proposition 6.1 in [

Therefore

Applying the Davydov inequality for strongly mixing processes (see [
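For reference, Davydov's inequality in its standard form reads (the numerical constant varies across references, so it is written here as a generic $C$): for $U$ measurable with respect to the past σ-field and $V$ with respect to the σ-field generated $m$ steps later, with $\|U\|_p < \infty$ and $\|V\|_q < \infty$,

```latex
\bigl| \operatorname{Cov}(U, V) \bigr| \;\le\; C\,\bigl(\alpha(m)\bigr)^{1/r}\, \|U\|_{p}\, \|V\|_{q},
\qquad \frac{1}{p} + \frac{1}{q} + \frac{1}{r} = 1 .
```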

Now we have

Hence by applying (4.7) and (4.8), we get

It follows from (4.5), (4.6) and (4.9) that

Now (4.10) together with Proposition 4.1 completes the proof.

Proposition 4.3. Suppose that the assumptions of Condition 2 hold. Let

Proof of Proposition 4.3. Using the same technique as in the proof of Proposition 4.2, we have

Applying the covariance inequality for ρ-mixing processes (see Doukhan [
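This covariance inequality is immediate from the definition of the maximal correlation coefficient: for square-integrable $U$ and $V$ measurable with respect to the past and the $m$-step future, respectively,

```latex
\bigl| \operatorname{Cov}(U, V) \bigr| \;\le\; \rho(m)\, \|U\|_{2}\, \|V\|_{2} .
```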

We obtain from (4.2),

Hence by

Thus Proposition 4.3 follows from (4.12) and (4.13).

Now, based on the above propositions, we have the following main result:

Theorem 4.1. Suppose that the assumptions of Section 2 hold. Let

where

Proof of Theorem 4.1. Since we set

As we define

First, consider the i.i.d. case. Using (4.2) and (4.3) and the fact that

Second, suppose the assumptions of Section 2 hold. Using Proposition 4.2 with

Remark 4.1. Theorem 4.1 shows that, under mild assumptions on the dependence of observations,

In this paper, we proposed a wavelet-based estimator for derivatives of a regression function in the random design setting. The proposed estimator is formulated in terms of a warped basis, which is simple and easy to apply. The results show that, without imposing overly restrictive assumptions on the model, the wavelet-based estimator attains a sharp rate of convergence under strong mixing and ρ-mixing structures.

The author would like to express her gratitude to the referee and chief editor for their valuable suggestions which have improved the earlier version of the paper.

Nargess Hosseinioun (2016) Estimation of Regression Function for Nonequispaced Samples Based on Warped Wavelets. Open Journal of Statistics, 6, 61-69. doi: 10.4236/ojs.2016.61008