Image Anal Stereol 2015;34:51-61
doi:10.5566/ias.1124
Original Research Paper

ESTIMATION OF MINKOWSKI TENSORS FROM DIGITAL GREY-SCALE IMAGES

ANNE MARIE SVANE
Karlsruhe Institute of Technology, Institute of Stochastics, D-76128 Karlsruhe, Germany
e-mail: amsvane@imf.au.dk
(Received January 30, 2014; revised July 23, 2014; accepted July 25, 2014)

ABSTRACT
It was shown in Svane (2014b) that local algorithms based on grey-scale images sometimes lead to asymptotically unbiased estimators for surface area and integrated mean curvature. This paper extends the results to estimators for Minkowski tensors. In particular, asymptotically unbiased local algorithms for the estimation of all volume and surface tensors and certain mean curvature tensors are given. This requires an extension of the asymptotic formulas of Svane (2014b) to estimators with position dependent weights.

Keywords: digital stereology, grey-scale images, local algorithms, Minkowski tensors.

INTRODUCTION

Minkowski tensors (Schneider and Schuster, 2002; Hug et al., 2008) are generalisations of the Minkowski functionals (Matheron, 1975; Schneider, 1993). They associate to a compact convex body $X \subseteq \mathbb{R}^d$ a symmetric tensor, rather than a scalar, and they carry information about shape features of $X$ such as position, anisotropy, and eccentricity. For this reason they are used as shape descriptors in statistical physics. For instance, in Schröder-Turk et al. (2010b), Minkowski tensors are used to detect anisotropy in spherical bead packs (see also Schröder-Turk et al. (2011) for an overview).

Since the data is often of a digital nature, there is a need for fast digital algorithms to estimate tensors. In the special case of Minkowski functionals, local algorithms are often chosen for this (Klette and Rosenfeld, 2004) since they can be implemented based on a linear filtering of the image (Ohser and Mücklich, 2000). The input is a black-and-white image, typically a thresholded grey-scale image. The idea is to count the number of times each $n \times \dots \times n$ configuration of black and white pixels occurs in the image and to estimate the Minkowski functional by a weighted sum of the configuration counts; a minimal sketch of this classical scheme is given at the end of this introduction. Since Minkowski tensors generally depend on the position of the underlying object, a local algorithm for these tensors would need to take the position of each $n \times \dots \times n$ configuration into account. Such modified algorithms are suggested in Schröder-Turk et al. (2008; 2010a).

It is well known that local algorithms for Minkowski functionals based on black-and-white images are generally biased (Kampf, 2012; Svane, 2014a), even when the resolution goes to infinity. The situation seems to be the same for most Minkowski tensors. For this reason, and because most black-and-white images arise as thresholded grey-scale images, local algorithms based directly on grey-scale images without thresholding were suggested in Svane (2014b) for Minkowski functionals. The existence of asymptotically unbiased algorithms for surface area and integrated mean curvature in this situation was also shown there. In fact, only $1 \times \dots \times 1$ configurations are needed for this.

The purpose of this paper is to extend these results and prove the existence of asymptotically unbiased estimators for surface and mean curvature tensors as well. Larger $n \times \dots \times n$ configurations are needed in order to gain information about surface normals. Moreover, position dependent weights are necessary in order to get information about position.
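For orientation, the classical configuration-count scheme for black-and-white images mentioned above can be sketched in a few lines of code. This is only an illustration: the weight vector below is a placeholder, since the actual weights depend on the functional to be estimated and on the particular algorithm (and, as discussed above, no choice of weights makes such estimators asymptotically unbiased in general).

```python
import numpy as np

def configuration_histogram(binary_img):
    """Count the 16 possible 2x2 configurations of a 2D black-and-white image.

    Entry k of the returned histogram is the number of 2x2 blocks whose four
    pixels, read as bits (top-left, top-right, bottom-left, bottom-right),
    encode the integer k.
    """
    b = np.asarray(binary_img, dtype=np.uint8)
    code = (b[:-1, :-1]
            + 2 * b[:-1, 1:]
            + 4 * b[1:, :-1]
            + 8 * b[1:, 1:])
    return np.bincount(code.ravel(), minlength=16)

def local_estimate(binary_img, weights, scale):
    """Weighted sum of configuration counts.

    `weights` is a hypothetical length-16 weight vector and `scale` the power
    of the lattice distance appropriate for the functional in question; both
    depend on the estimator one wishes to implement.
    """
    return scale * float(np.dot(weights, configuration_histogram(binary_img)))
```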
The paper is organised as follows. We first describe the model for grey-scale images we shall be working with, along with the generalised definition of local algorithms based on $n \times \dots \times n$ configurations of grey-values. In the next section, we prove a slight extension of the known theoretical results on the asymptotic behaviour of local algorithms, which is required for the study of these algorithms; it follows fairly easily from the technical lemmas in Svane (2014b). We then apply this to the estimation of Minkowski tensors. First we give their formal definition. In the subsequent sections, local estimators for volume, surface, and certain mean curvature tensors are constructed, and it is shown that they are asymptotically unbiased, i.e., they converge when the resolution tends to infinity and the point spread function (PSF) becomes concentrated near the boundary. In particular, we obtain a complete set of estimators for the Minkowski tensors in 2D. The algorithms require that the PSF is known; at least the knowledge of what a blurred halfspace looks like is required. Moreover, the resolution has to be sufficiently high compared to the support of the PSF. Finally, we show by an example how the formulas can be used to estimate all rank 2 tensors in 2D.

It should be emphasised that the focus of this paper is purely theoretical. We construct algorithms for the estimation of Minkowski tensors and show convergence in an idealised setting. To the best of our knowledge, these are the first algorithms based on grey-scale images without thresholding and the first to come with a convergence proof. Whether these algorithms perform well in finite, but high, resolution, or when the digitisation model is only approximately valid, will be the aim of future research; see also the discussion section at the end of the paper.

LOCAL ESTIMATORS FOR GREY-SCALE IMAGES

GREY-SCALE IMAGES

Let $X \subseteq \mathbb{R}^d$ be the compact set we are observing. We assume that the light coming from each point is spread out following a point spread function (PSF) which is independent of the position of the point. Hence the light that reaches the observer is given by the intensity function $\theta_X : \mathbb{R}^d \to [0,1]$, where the intensity measured at $x \in \mathbb{R}^d$ is given by
\[
\theta_X(x) = \int_X \rho(x-z)\,\mathrm{d}z .
\]
In other words, $\theta_X$ is the convolution $1_X * \rho$ of the indicator function $1_X$ of $X$ with a PSF $\rho$. The PSF is assumed to be a measurable function satisfying
(i) $\rho \geq 0$.
(ii) $\int_{\mathbb{R}^d} \rho(z)\,\mathrm{d}z = 1$.
We say that a PSF is rotation invariant if $\rho(x) = \rho(|x|)$ depends only on $|x|$.

A digital grey-scale image is the restriction of $\theta_X$ to an observation lattice $L$. A change of resolution corresponds to a change of lattice from $L$ to $aL$ for some $a > 0$. We assume that the precision of the measurements changes with resolution in such a way that the PSF corresponding to $aL$ is $\rho_a(x) = a^{-d}\rho(a^{-1}x)$; see the discussion in Svane (2014b), Section 2.1. The corresponding intensity function is denoted
\[
\theta^a_X(x) = \int_X \rho_a(x-z)\,\mathrm{d}z = \int_X a^{-d}\rho\big(a^{-1}(x-z)\big)\,\mathrm{d}z .
\]

In applications, the PSF is typically the Gaussian function (Köthe, 2008) or the Airy disk (Airy, 1835). Another important example is $\rho_B = \mathcal{H}^d(B)^{-1}1_B$ where $B \subseteq \mathbb{R}^d$ is a compact set of non-zero finite volume $\mathcal{H}^d(B)$. In this case, we measure at each $z \in L$ the fraction of $z + B$ covered by $X$. Such PSFs have compact support, but are not continuous.
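The digitisation model above is straightforward to simulate. The sketch below assumes the standard Gaussian PSF of Example 1 below; with the scaling $\rho_a(x) = a^{-d}\rho(a^{-1}x)$, the blur measured in pixels of the lattice $a\mathbb{Z}^2$ is the same at every resolution, so a fixed sigma in pixel units is used. The helper name and the use of scipy.ndimage.gaussian_filter are illustrative choices, not prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def grey_scale_image(indicator, sigma_pixels=1.0):
    """Approximate the intensity function theta_X^a sampled on the lattice a*Z^2.

    `indicator` samples 1_X on the pixel lattice; convolving it with a
    discretised Gaussian approximates theta_X^a = 1_X * rho_a.  Under the
    scaling rho_a(x) = a^{-d} rho(x / a), a standard Gaussian PSF always has
    width one pixel, hence sigma_pixels = 1.0 by default.
    """
    return gaussian_filter(np.asarray(indicator, dtype=float), sigma=sigma_pixels)

# Example: grey-scale image of a disc of radius 20 pixels on a 64 x 64 lattice.
n = 64
y, x = np.mgrid[0:n, 0:n]
disc = (x - n / 2) ** 2 + (y - n / 2) ** 2 <= 20 ** 2
theta = grey_scale_image(disc)          # grey values in [0, 1]
```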
A BLURRED HALFSPACE

For $u \in S^{d-1}$ and $\beta \in \mathbb{R}$, write
\[
H^-_{\beta,u} = \{x \in \mathbb{R}^d \mid \langle x,u\rangle \leq \beta\}
\]
for the corresponding halfspace. The intensity function associated to a halfspace in standard resolution will play a special role in the following. Hence we introduce the separate notation
\[
\psi_u(t) := \theta^1_{H^-_{0,u}}(tu) .
\]
Note for later that
\[
\theta^a_{H^-_{0,u}}(ax) = \theta^1_{H^-_{0,u}}\big(\langle x,u\rangle u\big) = \psi_u\big(\langle x,u\rangle\big) ,
\]
independently of $a$.

Example 1. If $\rho$ is the standard Gaussian $\rho(x) = (2\pi)^{-\frac d2}e^{-\frac12|x|^2}$, then
\[
\psi_u(t) = \int_{u^\perp}\int_{-\infty}^{0} \rho(tu - z - su)\,\mathrm{d}s\,\mathrm{d}z = \Phi(-t) ,
\]
where $\Phi$ is the distribution function of the standard 1-dimensional normal distribution.

A geometric interpretation of $\psi_u$ is illustrated in Fig. 1.

Fig. 1. A grey-scale image of a halfspace. The function $\psi_u$ measures how the grey-values change along the horizontal line going from right to left.

LOCAL ALGORITHMS IN THE GREY-SCALE SETTING

Let $L$ be a lattice in $\mathbb{R}^d$ spanned by the ordered basis $v_1,\dots,v_d \in \mathbb{R}^d$ and let $C_v = \bigoplus_{i=1}^d [0,v_i)$ be the fundamental cell of the lattice, where $\oplus$ denotes Minkowski addition, see Matheron (1975). As we shall later be scaling the lattice, we may as well assume that the volume $\det(v_1,\dots,v_d)$ of $C_v$ is 1. For $c \in \mathbb{R}^d$, we let $L_c = L + c$ denote the lattice translated by $c$ and $aL_c$ the scaling of $L_c$ by $a > 0$.

Fix $w \in L$. A fundamental $n\times\dots\times n$ lattice block is $C^n_{w,0} = (w + nC_v) \cap L$. More generally, we consider its translations $C^n_{w,z} = z + C^n_{w,0}$ by $z \in \mathbb{R}^d$. We denote by $[0,1]^{C^n_{w,0}}$ the set of $n^d$-tuples of points in $[0,1]$ indexed by $C^n_{w,0}$. A point in $[0,1]^{C^n_{w,0}}$ is written $\{\xi_s\}_{s\in C^n_{w,0}}$. The restriction of $\theta^a_X$ to $aC^n_{w,z}$ naturally defines a point in $[0,1]^{C^n_{w,0}}$ which we denote by
\[
\theta^a_X(az; aC^n_{w,0}) := \{\theta^a_X(az + as)\}_{s\in C^n_{w,0}} .
\]

Definition 2. A local algorithm $\hat V^f_q$ is an estimator of the form
\[
\hat V^f_q(X) = a^q \sum_{z\in L_c} f\big(\theta^a_X(az; aC^n_{w,0}), az\big) , \tag{1}
\]
where $f : [0,1]^{C^n_{w,0}} \times \mathbb{R}^d \to \mathbb{R}$ is a Borel function. We assume that the support of $f$ is contained in $A \times \mathbb{R}^d$, where $A \subseteq (0,1)^{C^n_{w,0}}$ is compact, and that $f$ is bounded on compact sets.

The assumptions on $f$ make the sum in Eq. 1 finite and $z \mapsto f(\theta^a_X(az; aC^n_{w,0}), z)$ integrable whenever $X$ is compact. We assume that the lattice is stationary random, i.e., we consider the lattice $L_c = L + c$ where $c \in C_v$ is uniform random. Then the mean of the estimator is
\[
\mathbb{E}\hat V^f_q(X) = a^q\, \mathbb{E}\sum_{z\in L_c} f\big(\theta^a_X(az; aC^n_{w,0}), az\big)
= a^{q-d}\int_{\mathbb{R}^d} f\big(\theta^a_X(z; aC^n_{w,0}), z\big)\,\mathrm{d}z . \tag{2}
\]

As a natural convergence criterion, we take the following:

Definition 3. A local algorithm $\hat V^f_q(X)$ is called an asymptotically unbiased estimator for $V(X)$ if $\lim_{a\to 0}\mathbb{E}\hat V^f_q(X) = V(X)$.
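Definition 2 translates directly into code. The sketch below is a naive 2D version: it loops over all $n\times n$ blocks of the grey-value array, applies a user-supplied weight function $f$ to the block together with the physical position of its anchor pixel, and rescales by $a^q$. The identification of array indices with lattice coordinates is an illustrative convention, and no attempt is made to randomise the lattice translation $c$.

```python
import numpy as np

def local_estimator(theta, f, a, q, n=2):
    """Local estimator of Definition 2 (minimal 2D sketch).

    theta : 2D array of grey values theta_X^a on the lattice a*Z^2
    f     : weight function f(block, position), where `block` is the n x n
            array of grey values and `position` the coordinates a*z of the
            block's anchor pixel
    a, q  : lattice distance and homogeneity exponent; the estimate is
            a**q times the sum of the weights over all anchor pixels z
    """
    rows, cols = theta.shape
    total = 0.0
    for i in range(rows - n + 1):
        for j in range(cols - n + 1):
            block = theta[i:i + n, j:j + n]
            total += f(block, a * np.array([i, j], dtype=float))
    return a ** q * total
```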
THE RELEVANT SET-CLASSES

In order to prove the formulas, we need to make some assumptions on $X$. First some notation. For a closed set $X \subseteq \mathbb{R}^d$, we let $\operatorname{exo}(X)$ denote the set of points in $\mathbb{R}^d$ not having a unique nearest point in $X$. Let $\xi_X : \mathbb{R}^d\setminus\operatorname{exo}(X) \to X$ be the natural projection taking a point in $\mathbb{R}^d\setminus\operatorname{exo}(X)$ to its nearest point in $X$. We define the normal bundle of $X$ to be the set
\[
N(X) = \Big\{\Big(x, \tfrac{z-x}{|z-x|}\Big) \in X \times S^{d-1} \;\Big|\; z \in \mathbb{R}^d\setminus\big(X\cup\operatorname{exo}(X)\big),\ \xi_X(z) = x\Big\} .
\]
For $(x,u) \in N(X)$ we define the reach
\[
\delta(X; x, u) = \inf\{t \geq 0 \mid x + tu \in \operatorname{exo}(X)\} > 0 .
\]
Let $\mathcal{H}^k$ denote the $k$-dimensional Hausdorff measure. Following Kiderlen and Rataj (2007), we introduce the class of gentle sets:

Definition 4. A closed set $X \subseteq \mathbb{R}^d$ is called gentle if:
(i) $\mathcal{H}^{d-1}\big(N(\partial X) \cap (B \times S^{d-1})\big) < \infty$ for any bounded Borel set $B \subseteq \mathbb{R}^d$.
(ii) For $\mathcal{H}^{d-1}$-almost all $x \in \partial X$ there exist two balls $B_{in}, B_{out} \subseteq \mathbb{R}^d$, both containing $x$, such that $B_{in} \subseteq X$ and $\operatorname{int}(B_{out}) \subseteq \mathbb{R}^d\setminus X$.

Condition (ii) of the definition means that for a.a. $x \in \partial X$ there is a unique pair $(x, u(x))$ in $N(X)$ with $(x, u(x)), (x, -u(x)) \in N(\partial X)$. This class is quite general, including for instance all $C^1$ manifolds and all polyconvex sets satisfying a certain regularity condition (Kiderlen and Rataj, 2007). We shall also consider the subclass of $r$-regular sets:

Definition 5. A gentle set $X \subseteq \mathbb{R}^d$ is called $r$-regular for some $r > 0$ if the balls $B_{in}$ and $B_{out}$ exist for every $x \in \partial X$ and can be chosen to have radius $r$.

Being $r$-regular is slightly weaker than being a $C^2$ manifold. It can be proved (Federer, 1959) that if $X$ is $r$-regular, then $\partial X$ is a $C^1$ manifold with an $\mathcal{H}^{d-1}$-a.e. differentiable normal vector field $u$. Thus its principal curvatures $k_1,\dots,k_{d-1}$, corresponding to the orthogonal principal directions $e_1,\dots,e_{d-1} \in T\partial X$, where $T\partial X$ is the tangent bundle, can be defined a.e. as the eigenvalues of the differential $du$. Hence the second fundamental form $\mathrm{II}_x$ on the tangent space $T_x\partial X$ is defined for $\mathcal{H}^{d-1}$-a.a. $x \in \partial X$: for a tangent vector $\sum_{i=1}^{d-1}\alpha_i e_i \in T_x\partial X$, it is the quadratic form given by
\[
\mathrm{II}_x\Big(\sum_{i=1}^{d-1}\alpha_i e_i\Big) = \sum_{i=1}^{d-1} k_i(x)\,\alpha_i^2 ,
\]
whenever $d_x u$ is defined. In particular, the trace is $\operatorname{Tr}(\mathrm{II}) = k_1 + \dots + k_{d-1}$. Note for later that $r$-regularity ensures that $k_1,\dots,k_{d-1} \leq r^{-1}$.

The $(d-2)$nd curvature measure of $X$ is defined (Federer, 1959) for $r$-regular sets by
\[
C_{d-2}(X; A) = \frac{1}{2\pi}\int_{\partial X \cap A} \operatorname{Tr}(\mathrm{II})\,\mathrm{d}\mathcal{H}^{d-1}
\]
for all Borel sets $A \subseteq \mathbb{R}^d$.

ASYMPTOTIC FORMULAS

FIRST ORDER FORMULAS

The following notation will be used in the proofs. For a finite set $S$, $|S|$ denotes its cardinality. Given an interval $I$, we denote by $I^S$ the set of $|S|$-tuples $\{\xi_s\}_{s\in S}$ of points $\xi_s \in I$ indexed by $S$. Given a finite set $S \subseteq \mathbb{R}^d$ we write
\[
\theta^a_X(x; S) = \{\theta^a_X(x+s)\}_{s\in S} \in [0,1]^S , \qquad
\psi_u(t; S) = \{\psi_u(t + \langle s,u\rangle)\}_{s\in S} \in [0,1]^S .
\]
For $x \in \partial X$ understood and $u$ an outward pointing normal, we also write $H_u := H^-_{\langle x,u\rangle, u}$ for the supporting halfspace. Note that
\[
\theta^a_{H_u}\big(x + a(tu+s)\big) = \theta^a_{H^-_{0,u}}\big(a(t + \langle s,u\rangle)u\big) = \psi_u(t + \langle s,u\rangle) ,
\]
so that $\psi_u(t; S) = \{\theta^a_{H_u}(x + a(tu+s))\}_{s\in S}$.

The proofs follow from the following lemma shown in Svane (2014b), Lemmas 7.1 and 7.2:

Lemma 6. Suppose $X$ is gentle and $\rho$ is a bounded PSF. Let $D > 0$. Then for a.a. $x \in \partial X$,
\[
\lim_{a\to 0}\ \sup\big\{\,\big|\theta^a_X(x + atu + as) - \psi_u(t + \langle s,u\rangle)\big| \ :\ t \in [-D,D],\ s \in B(D)\,\big\} = 0 ,
\]
where $B(D)$ denotes the ball in $\mathbb{R}^d$ of radius $D$.

Theorem 7. Suppose $X \subseteq \mathbb{R}^d$ is a compact gentle set, $S \subseteq \mathbb{R}^d$ is finite, and $\rho$ is a bounded PSF. Let $f : (0,1)^S \times \mathbb{R}^d \to \mathbb{R}$ be continuous and assume that $\operatorname{supp} f \subseteq [\beta,\gamma]^S \times \mathbb{R}^d$ for some $\beta, \gamma \in (0,1)$. Then
\[
\lim_{a\to 0} a^{-1}\int_{\mathbb{R}^d} f\big(\theta^a_X(x; aS), x\big)\,\mathrm{d}x
= \int_{\partial X}\int_{\mathbb{R}} f\big(\psi_u(t; S), x\big)\,\mathrm{d}t\,\mathcal{H}^{d-1}(\mathrm{d}x) .
\]

Proof. Let $D > 0$ be such that
\[
\int_{|x|\geq \frac D2} \rho(x)\,\mathrm{d}x \leq \beta \wedge (1-\gamma)
\]
and $S \subseteq B(\tfrac D2)$. This ensures that
\[
\operatorname{supp} f\big(\theta^a_X(\,\cdot\,; aS),\,\cdot\,\big) \subseteq \partial X \oplus B(aD) .
\]
Then the generalised Weyl tube formula in Hug et al. (2004), Theorem 2.1, yields
\[
\int_{\mathbb{R}^d} f\big(\theta^a_X(x; aS), x\big)\,\mathrm{d}x
= \sum_{m=1}^{d} m\,\omega_m \int_{N(\partial X)}\int_0^{\delta(\partial X; x,u)} t^{m-1} f\big(\theta^a_X(x+tu; aS), x+tu\big)\,\mathrm{d}t\,\mu_{d-m}\big(\partial X; \mathrm{d}(x,u)\big) . \tag{3}
\]
Here $\omega_m$ is the volume of the unit ball in $\mathbb{R}^m$ and the $\mu_i$ are certain signed measures of locally finite total variation. Observe that
\[
\int_0^{\delta(\partial X; x,u)} t^{m-1} f\big(\theta^a_X(x+tu; aS), x+tu\big)\,\mathrm{d}t \leq m^{-1}(aD)^m \sup|f| , \tag{4}
\]
so that dominated convergence together with Kiderlen and Rataj (2007), Eq. 8, yields
\[
\lim_{a\to 0} a^{-1}\sum_{m=1}^{d} m\,\omega_m \int_{N(\partial X)}\int_0^{aD} t^{m-1} f\big(\theta^a_X(x+tu; aS), x+tu\big)\,\mathrm{d}t\,\mu_{d-m}\big(\partial X; \mathrm{d}(x,u)\big)
\]
\[
= \lim_{a\to 0}\int_{\partial X}\int_{-D}^{D} f\big(\theta^a_X(x+atu; aS), x+atu\big)\,\mathrm{d}t\,\mathcal{H}^{d-1}(\mathrm{d}x)
= \int_{\partial X}\int_{-D}^{D} f\big(\psi_u(t; S), x\big)\,\mathrm{d}t\,\mathcal{H}^{d-1}(\mathrm{d}x) .
\]
The last equality follows from Lemma 6 and the continuity of $f$.

Assume $X$ is compact gentle and $\rho$ bounded. Let $A \subseteq (0,1)^S$ be a compact set and $g : \mathbb{R}^d \to \mathbb{R}$ a continuous function. Define the measures on $A$ given for any Borel set $B \subseteq A$ by
\[
\mu^{X,g}_a(B) = a^{-1}\int_{\mathbb{R}^d} 1_B\big(\theta^a_X(x; aS)\big)\,g(x)\,\mathrm{d}x ,
\]
and
\[
\mu^{X,g}(B) = \int_{\partial X}\int_{-D}^{D} 1_B\big(\psi_u(t; S)\big)\,\mathrm{d}t\,g(x)\,\mathcal{H}^{d-1}(\mathrm{d}x) .
\]
Corollary 8. Let $X$ be a compact gentle set and let $A \subseteq (0,1)^S$ be a compact set. Let $g : \mathbb{R}^d \to \mathbb{R}$ be continuous and assume $\mu^{X,g}(\partial A) = 0$. Then $\mu^{X,g}_a$ converges weakly to $\mu^{X,g}$. In particular, if $h : A \to \mathbb{R}$ is continuous and $f(\xi, x) = h(\xi)g(x)$, then, with $q = d-1$,
\[
\lim_{a\to 0}\mathbb{E}\hat V^f_q(X) = \int_{\partial X}\int_{-D}^{D} h\big(\psi_u(t; S)\big)\,\mathrm{d}t\,g(x)\,\mathcal{H}^{d-1}(\mathrm{d}x) .
\]

Proof. For any bounded continuous $h : A \to \mathbb{R}$,
\[
\int_A h\,\mathrm{d}\mu^{X,g}_a \to \int_A h\,\mathrm{d}\mu^{X,g} .
\]
This follows from Theorem 7 by approximating $h$ by continuous functions on $(0,1)^S$.

NOTATION

We next introduce some more notation that will be used in order to keep formulas short in the statement of the main second order theorem and its proof. Moreover, we state a technical lemma proved in Svane (2014b).

We will assume $\rho$ to be continuous and compactly supported. In this case all $\psi_u$ are $C^1$ with $(u,t) \mapsto \psi_u'(t)$ continuous. We say that $\beta \in (0,1)$ is a regular value if $\psi_u'(t) < 0$ for all $t$ with $\psi_u(t) = \beta$ and all $u \in S^{d-1}$. Since $\psi_u$ is decreasing, this ensures that $\psi_u^{-1}(\beta)$ is uniquely determined.

For $X \subseteq \mathbb{R}^d$ $r$-regular, define the quadratic approximation $Q_x$ to $X$ at $x \in \partial X$ by
\[
Q_x = \Big\{z \in \mathbb{R}^d \;\Big|\; \langle z-x, u\rangle \leq -\tfrac12\, \mathrm{II}_x\big(\pi_{u^\perp}(z-x)\big)\Big\} ,
\]
where $\pi_{u^\perp} : \mathbb{R}^d \to u^\perp$ denotes the orthogonal projection. It is shown in Svane (2014b), in the proof of Lemma 7.6, that for $s \in \mathbb{R}^d$
\[
\theta^a_{Q_x}\big(x + a(tu+s)\big) = \psi_u(t + \langle s,u\rangle) + a\,\eta_{Q_x}(t,s) + o(a) , \tag{5}
\]
where
\[
\eta_{Q_x}(t,s) = -\tfrac12\int_{u^\perp} \mathrm{II}_x(z)\,\rho(tu + s - z)\,\mathrm{d}z .
\]
Again we use the notation
\[
\theta^a_{Q_x}(t; aS) = \{\theta^a_{Q_x}(x + a(tu+s))\}_{s\in S} , \qquad
\eta_{Q_x}(t; S) = \{\eta_{Q_x}(t,s)\}_{s\in S} .
\]
Choose $D$ as in the proof of Theorem 7. Given $A \subseteq (0,1)^S$ and $x \in \partial X$, let
\[
t_0^S = \inf\{t \in [-D,D] \mid \psi_u(t; S) \in A\} , \qquad
t_1^S = \sup\{t \in [-D,D] \mid \psi_u(t; S) \in A\} ,
\]
\[
t_0^S(a) = \inf\{t \in [-D,D] \mid \theta^a_{Q_x}(t; aS) \in A\} , \qquad
t_1^S(a) = \sup\{t \in [-D,D] \mid \theta^a_{Q_x}(t; aS) \in A\} ,
\]
\[
t_0^{X,S}(a) = \inf\{t \in [-D,D] \mid \theta^a_X(x + atu; aS) \in A\} , \qquad
t_1^{X,S}(a) = \sup\{t \in [-D,D] \mid \theta^a_X(x + atu; aS) \in A\} .
\]
Finally, let
\[
\varepsilon_0^S(x) = \max\Big\{-\frac{\eta_{Q_x}(t_0^S, s)}{\psi_u'\big(t_0^S + \langle s,u\rangle\big)} \;\Big|\; s \in S,\ t_0^S = t_0^{\{s\}}\Big\} , \qquad
\varepsilon_1^S(x) = \min\Big\{-\frac{\eta_{Q_x}(t_1^S, s)}{\psi_u'\big(t_1^S + \langle s,u\rangle\big)} \;\Big|\; s \in S,\ t_1^S = t_1^{\{s\}}\Big\} .
\]

Lemma 9. Suppose that $X$ is $r$-regular and $\rho$ is continuous with compact support. Let $R > 0$ and a finite set $S \subseteq \mathbb{R}^d$ be given. For all $a$ sufficiently small,
\[
t \mapsto \theta^a_X\big(x + a(tu+s)\big) \qquad\text{and}\qquad t \mapsto \theta^a_{Q_x}\big(x + a(tu+s)\big)
\]
are decreasing functions for all $x \in \partial X$, $s \in S$, and $t \in [-R,R]$. There is a constant $M > 0$ such that for $\tau = 0,1$ and $a$ sufficiently small,
\[
\sup\big\{\big|\theta^a_X(x + atu; aS) - \psi_u(t; S)\big| \ :\ x \in \partial X,\ t \in [-R,R]\big\} \leq Ma , \tag{6}
\]
\[
\sup\big\{\big|t_\tau^{X,S}(a) - t_\tau^S\big| \ :\ x \in \partial X\big\} \leq Ma . \tag{7}
\]
Assume that $A = \prod_{s\in S} I_s$ where the $I_s$ are intervals and all points in $\partial I_s$ are regular values. Then for each $x \in \partial X$ and $\tau = 0,1$,
\[
\sup\big\{\big|\theta^a_X(x + atu; aS) - \theta^a_{Q_x}(t; aS)\big| \ :\ t \in [-R,R]\big\}
\qquad\text{and}\qquad
\big|t_\tau^{X,S}(a) - t_\tau^S(a)\big| \tag{8}
\]
are of order $o(a)$, and
\[
t_\tau^S(a) = t_\tau^S + a\,\varepsilon_\tau^S(x) + o(a) . \tag{9}
\]

Proof. The lemma is essentially proved in Svane (2014b); note that the notation has changed. The first statement is proved in Lemma 7.5 for $\theta^a_X$; the proof for $\theta^a_{Q_x}$ is similar. Eqs. 6 and 7 are shown in the proofs of Theorems 3.2 and 5.2. Eq. 8 follows from Lemma 7.7 and Eq. 9 from Lemma 7.6.
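For the Gaussian PSF of Example 1, the quantities entering this notation are explicit: $\psi(t) = \Phi(-t)$, so $\psi' = -\varphi$ is strictly negative, every $\beta \in (0,1)$ is a regular value, and $\psi^{-1} = -\Phi^{-1}$. A minimal sketch of these closed forms, which are used again in the 2D example at the end of the paper:

```python
from scipy.stats import norm

def psi(t):
    """Blurred halfspace profile psi(t) = Phi(-t) for the standard Gaussian PSF."""
    return norm.cdf(-t)

def psi_prime(t):
    """psi'(t) = -phi(t) < 0 for all t, so every beta in (0,1) is a regular value."""
    return -norm.pdf(t)

def psi_inv(y):
    """psi^{-1}(y) = -Phi^{-1}(y), defined on all of (0,1)."""
    return -norm.ppf(y)

# e.g. psi_inv(0.1) = Phi^{-1}(0.9) ~ 1.2816
```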
SECOND ORDER FORMULAS

Theorem 10. Suppose $X$ is an $r$-regular set and $\rho$ is continuous and compactly supported. Let $S \subseteq \mathbb{R}^d$ be a finite set and $A = \prod_{s\in S} I_s$, where the $I_s \subseteq (0,1)$ are closed intervals such that $\partial I_s$ consists of regular values for all $s \in S$. Let $f : A \times \mathbb{R}^d \to \mathbb{R}$ be $C^1$. Then
\[
\lim_{a\to 0}\Big( a^{-2}\int_{\mathbb{R}^d} f\big(\theta^a_X(x; aS), x\big)\,\mathrm{d}x
- a^{-1}\lim_{a'\to 0} a'^{-1}\int_{\mathbb{R}^d} f\big(\theta^{a'}_X(x; a'S), x\big)\,\mathrm{d}x \Big)
\]
\[
= \int_{\partial X}\int_{t_0^S}^{t_1^S} t\, f\big(\psi_u(t; S), x\big)\,\mathrm{d}t\ \operatorname{Tr}(\mathrm{II}_x)\,\mathcal{H}^{d-1}(\mathrm{d}x)
\]
\[
\quad + \int_{\partial X}\int_{t_0^S}^{t_1^S} \Big( \big\langle \nabla_1 f\big(\psi_u(t; S), x\big), \eta_{Q_x}(t; S)\big\rangle
+ t\,\big\langle \nabla_2 f\big(\psi_u(t; S), x\big), u\big\rangle \Big)\,\mathrm{d}t\,\mathcal{H}^{d-1}(\mathrm{d}x)
\]
\[
\quad + \sum_{\tau=0,1} (-1)^{\tau+1}\int_{\partial X} f\big(\psi_u(t_\tau^S; S), x\big)\,\varepsilon_\tau^S(x)\,\mathcal{H}^{d-1}(\mathrm{d}x) .
\]
Here $\nabla_1$ and $\nabla_2$ denote the gradients of $\xi \mapsto f(\xi, x)$ and $x \mapsto f(\xi, x)$, respectively.

Proof. For $r$-regular sets, the generalised Weyl tube formula reduces to
\[
\int_{\mathbb{R}^d} f\big(\theta^a_X(x; aS), x\big)\,\mathrm{d}x
= \sum_{m=1}^{d} a^m \int_{\partial X}\int_{-D}^{D} t^{m-1} f\big(\theta^a_X(x + atu; aS), x + atu\big)\,\mathrm{d}t\ s_{m-1}(x)\,\mathcal{H}^{d-1}(\mathrm{d}x) ,
\]
where $s_m(x)$ is the $m$th symmetric polynomial in the principal curvatures at $x$, whenever these are defined. Again, Eq. 4 shows that dominated convergence applies to all terms with $m \leq 2$ and that all terms with $m \geq 3$ vanish asymptotically.

For $m = 2$, consider
\[
\Big|\int_{-D}^{D} \Big( t f\big(\theta^a_X(x+atu; aS), x+atu\big) - t f\big(\psi_u(t; S), x\big)\Big)\,\mathrm{d}t\Big|
\leq 2D^2 \sup|\nabla f|\,\sup\Big\{\big|\theta^a_X(x+atu; aS) - \psi_u(t; S)\big| + aD \ :\ t \in [t_0^S, t_1^S]\cap\big[t_0^{X,S}(a), t_1^{X,S}(a)\big]\Big\}
\]
\[
\quad + 2D\sup|f|\ \mathcal{H}^1\Big([t_0^S, t_1^S]\,\triangle\,\big[t_0^{X,S}(a), t_1^{X,S}(a)\big]\Big) , \tag{10}
\]
where $\triangle$ denotes the symmetric difference. By Eqs. 6 and 7, the right hand side is of order $O(a)$.

For the $m = 1$ term, a similar argument shows that
\[
\int_{-D}^{D} \big| f\big(\theta^a_X(x+atu; aS), x+atu\big) - f\big(\psi_u(t; S), x\big)\big|\,\mathrm{d}t \tag{11}
\]
is uniformly $O(a)$. Hence another application of dominated convergence shows that it is enough to determine the limit of this term for each $x \in \partial X$. Another argument similar to that in Eq. 10, using Eq. 8, shows that
\[
\lim_{a\to 0} a^{-1}\int_{-D}^{D} \Big( f\big(\theta^a_X(x+atu; aS), x+atu\big) - f\big(\theta^a_{Q_x}(t; aS), x+atu\big)\Big)\,\mathrm{d}t = 0 .
\]
Thus it remains to compute
\[
\lim_{a\to 0} a^{-1}\int_{-D}^{D} \Big( f\big(\theta^a_{Q_x}(t; aS), x+atu\big) - f\big(\psi_u(t; S), x\big)\Big)\,\mathrm{d}t .
\]
The integrand is uniformly bounded on $G(a) = [t_0^S, t_1^S] \cap [t_0^S(a), t_1^S(a)]$ by the differentiability of $f$ and another application of Lemma 9 (Eq. 6) with $X$ replaced by $Q_x$. Observe that $1_{G(a)}(t) \to 1_{[t_0^S, t_1^S]}(t)$ pointwise. Hence, by dominated convergence and Eq. 5,
\[
a^{-1}\int_{G(a)} \Big( f\big(\theta^a_{Q_x}(t; aS), x+atu\big) - f\big(\psi_u(t; S), x\big)\Big)\,\mathrm{d}t
\to \int_{t_0^S}^{t_1^S} \Big( \big\langle \nabla_1 f\big(\psi_u(t; S), x\big), \eta_{Q_x}(t; S)\big\rangle + t\,\big\langle \nabla_2 f\big(\psi_u(t; S), x\big), u\big\rangle\Big)\,\mathrm{d}t
\]
for $a \to 0$. It remains to consider the integrals over the sets
\[
\big[t_0^S(a)\wedge t_0^S,\ t_0^S(a)\vee t_0^S\big] \qquad\text{and}\qquad \big[t_1^S(a)\wedge t_1^S,\ t_1^S(a)\vee t_1^S\big] . \tag{12}
\]
The integral over the first interval is
\[
a^{-1}\int_{t_0^S(a)}^{t_0^S} \Big( f\big(\theta^a_{Q_x}(t; aS), x+atu\big) + f\big(\psi_u(t; S), x\big)\Big)\,\mathrm{d}t
= a^{-1}\int_{t_0^S + a\varepsilon_0^S(x)}^{t_0^S} \Big( f\big(\theta^a_{Q_x}(t; aS), x+atu\big) + f\big(\psi_u(t; S), x\big)\Big)\,\mathrm{d}t + o(1) ,
\]
by Lemma 9, Eq. 9. Since $|t - t_0^S| \leq Ma$ for all $t \in [t_0^S(a)\wedge t_0^S,\ t_0^S(a)\vee t_0^S]$,
\[
a^{-1}\int_{t_0^S + a\varepsilon_0^S(x)}^{t_0^S} \Big( f\big(\theta^a_{Q_x}(t; aS), x+atu\big) + f\big(\psi_u(t; S), x\big)\Big)\,\mathrm{d}t
= -a^{-1}\int_{t_0^S}^{t_0^S + a\varepsilon_0^S(x)} f\big(\psi_u(t_0^S; S), x\big)\,\mathrm{d}t + o(1)
= -\varepsilon_0^S(x)\, f\big(\psi_u(t_0^S; S), x\big) + o(1) .
\]
The second interval in Eq. 12 is treated similarly.

ESTIMATION OF THE MINKOWSKI TENSORS

MINKOWSKI TENSORS

To a compact set $X \subseteq \mathbb{R}^d$ one can associate the generalised curvature measures $C_k(X;\cdot)$ defined on $\Sigma = \mathbb{R}^d \times S^{d-1}$ for $k = 0,\dots,d-1$; see Schneider (1993) in the case of polyconvex sets and Federer (1959) for sets of positive reach. An extension to general compact sets can be found in Hug et al. (2004).

Let $\mathbb{T}^p$ denote the space of symmetric tensors of rank $p$ on $\mathbb{R}^d$. Identifying $\mathbb{R}^d$ with its dual using the Euclidean inner product $\langle\cdot,\cdot\rangle$, one can interpret a symmetric $p$-tensor as a symmetric $p$-linear functional on $\mathbb{R}^d$. Let $x^r$ denote the $r$-fold tensor product of $x \in \mathbb{R}^d$ and, for two tensors $a$ and $b$, let $ab$ denote their symmetric tensor product. For $X \subseteq \mathbb{R}^d$, $r,s \geq 0$, and $k = 0,\dots,d-1$ we associate the $(r+s)$-tensors
\[
\Phi_k^{r,s}(X) = \frac{1}{r!\,s!}\,\frac{\omega_{d-k}}{\omega_{d-k+s}}\int_\Sigma x^r u^s\, C_k\big(X; \mathrm{d}(x,u)\big) ,
\]
and for $r \geq 0$ we define the volume tensors
\[
\Phi_d^{r,0}(X) = \frac{1}{r!}\int_X x^r\,\mathrm{d}x .
\]
These are the so-called Minkowski tensors introduced in McMullen (1997) (see also, e.g., Schneider and Schuster, 2002; Hug et al., 2008). The Minkowski tensors satisfy the McMullen relations (McMullen, 1997) on convex sets,
\[
2\pi \sum_{s} s\,\Phi_{k-r+s}^{r-s,s} = Q \sum_{s} \Phi_{k-r+s}^{r-s,s-2} ,
\]
where $k \geq 0$, $r \geq 0$, and $Q$ is the metric tensor. All tensors in the sums that have not been defined above should be interpreted as 0.

Below we shall define estimators for $\Phi_d^{r,0}$, $\Phi_{d-1}^{r,s}$, and $\Phi_{d-2}^{r,0}$. The McMullen relations show that in dimension $d = 2$, all tensors are linear combinations of multiples of these by powers of $Q$. Hence, in 2D we obtain a complete set of estimators for the Minkowski tensors.
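For concrete computations it is convenient to represent the rank-two tensors appearing below as matrices: $x^2$ corresponds to the outer product $xx^T$ and the metric tensor $Q$ to the identity matrix (cf. the remark on $(m,n)^2$ in the 2D example). A minimal illustration; the helper name is not from the paper:

```python
import numpy as np

def tensor_square(x):
    """The 2-fold tensor power x^2 of a vector x, represented as the matrix x x^T.

    Higher powers x^r would be r-dimensional arrays, e.g. via repeated
    np.multiply.outer; only the rank-two case is needed for the 2D example.
    """
    x = np.asarray(x, dtype=float)
    return np.outer(x, x)

Q = np.eye(2)                               # metric tensor of R^2 as a matrix
print(tensor_square([3.0, 4.0]))            # [[ 9. 12.], [12. 16.]]
```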
VOLUME TENSORS

It is easy to see that the volume tensors can be estimated unbiasedly from black-and-white images, even in finite resolution, just using a Riemann sum:
\[
\hat\Phi_d^{r,0}(X) = a^d\, \frac{1}{r!} \sum_{z\in aL\cap X} z^r .
\]
If only a grey-scale image is given, one may threshold the image at a level $\beta \in (0,1)$ and apply this estimator. This yields the estimator
\[
\hat\Phi_d^{r,0}(X) = a^d\, \frac{1}{r!} \sum_{z\in aL} 1_{\{\theta^a_X(z)\geq\beta\}}\, z^r .
\]
This is asymptotically unbiased for all sets with $\mathcal{H}^{d-1}(\partial X) < \infty$, since
\[
\mathbb{E}\hat\Phi_d^{r,0}(X) = \frac{1}{r!}\int_{\mathbb{R}^d} z^r\, 1_{\{\theta^a_X(z)\geq\beta\}}\,\mathrm{d}z ,
\]
and $|1_{\{\theta^a_X\geq\beta\}} - 1_X| \leq 1_{\partial X \oplus B(aD)}$, where $D$ is such that
\[
\int_{|z|\geq D} \rho(z)\,\mathrm{d}z \leq \beta \wedge (1-\beta) .
\]
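As a concrete illustration, the rank-two volume tensor in 2D can be computed from a thresholded grey-scale image exactly as above. The identification of array indices with lattice points is an illustrative convention.

```python
import numpy as np

def volume_tensor_rank2(theta, a, beta=0.5):
    """Estimate Phi_2^{2,0}(X) in 2D by the thresholded Riemann sum.

    Implements a^d / r! * sum of z^r over lattice points z with theta(z) >= beta,
    for d = 2 and r = 2, with z^2 represented as the matrix z z^T.
    """
    z = a * np.argwhere(theta >= beta).astype(float)   # coordinates a*z of selected points
    return 0.5 * a ** 2 * (z.T @ z)                    # 1/r! = 1/2, a^d = a^2

# Lower ranks follow the same pattern:
#   r = 0:  a**2 * np.count_nonzero(theta >= beta)     (area estimate)
#   r = 1:  a**2 * z.sum(axis=0)                       (rank-one volume tensor)
```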
SURFACE TENSORS

In this section we define local algorithms based on $2\times\dots\times 2$ configurations for the surface tensors $\Phi_{d-1}^{r,s}(X)$. As in the case of the volume tensors, the position dependent part is obtained by multiplying the weights by $x^r$. To estimate the direction of the surface normals, note that the grey-values change most rapidly in the normal direction, while they are almost constant in the tangent directions. Thus the normal direction can be estimated locally from how fast the grey-values change along the lattice directions. This is the intuition behind the algorithm below and the reason why $2\times\dots\times 2$ configurations of grey-values are needed.

We assume that $X$ is a gentle set. Hence the surface tensors take the form
\[
\Phi_{d-1}^{r,s}(X) = \frac{1}{r!\,s!}\,\frac{2}{\omega_{s+1}}\int_{\partial X} x^r\, u(x)^s\,\mathcal{H}^{d-1}(\mathrm{d}x) .
\]
Identifying $\mathbb{R}^d$ with its dual, it is enough to determine all evaluations on a basis $v_1,\dots,v_d \in \mathbb{R}^d$,
\[
\Phi_{d-1}^{r,s}(X)(v_{i_1},\dots,v_{i_{r+s}})
= \frac{1}{r!\,s!}\,\frac{2}{\omega_{s+1}}\,\frac{1}{(r+s)!}\int_{\partial X}\ \sum_{\sigma\in\mathcal{S}_{r+s}}\ \prod_{k=1}^{r}\langle x, v_{i_{\sigma(k)}}\rangle \prod_{l=r+1}^{r+s}\langle u(x), v_{i_{\sigma(l)}}\rangle\,\mathcal{H}^{d-1}(\mathrm{d}x) ,
\]
for all choices of $i_1,\dots,i_{r+s} \in \{1,\dots,d\}$, where $\mathcal{S}_{r+s}$ is the set of permutations of $r+s$ elements. Hence it is enough to estimate
\[
\frac{1}{r!\,s!}\,\frac{2}{\omega_{s+1}}\int_{\partial X} \prod_{k=1}^{r}\langle x, v_{i_k}\rangle \prod_{l=r+1}^{r+s}\langle u(x), v_{i_l}\rangle\,\mathcal{H}^{d-1}(\mathrm{d}x) \tag{13}
\]
for each tuple $i_1,\dots,i_{r+s}$. As basis we choose the vectors $v_1,\dots,v_d$ spanning $L$. Let $V = \max\{|v_i| : i = 1,\dots,d\}$.

The estimation requires some assumptions on the PSF:
(i) $\rho$ is rotation invariant, $\rho(x) = \rho(|x|)$. In this case, $\psi(t) := \psi_u(t)$ is independent of $u$.
(ii) $\psi$ is strictly decreasing on $\psi^{-1}((0,1))$. In this case the inverse $\psi^{-1}$ exists on $(0,1)$.
(iii) The lattice is so fine compared to the support of $\rho$ that $\psi^{-1}((0,1))$ contains an interval of the form $[\psi^{-1}(\gamma) - V,\ \psi^{-1}(\beta) + V]$ where $0 < \beta < \gamma < 1$. This ensures that $\psi^{-1}$ is well-defined on the interval $\psi([\psi^{-1}(\gamma) - V - \varepsilon,\ \psi^{-1}(\beta) + V + \varepsilon])$ for some small $\varepsilon > 0$.
Note that (i) and (ii) are satisfied for both the Gaussian and the Airy disk. Under these conditions, observe that
\[
\psi^{-1}\big(\psi_u(t + \langle v_i, u\rangle)\big) - \psi^{-1}\big(\psi_u(t)\big) = \langle u, v_i\rangle \tag{14}
\]
whenever $\psi_u(t) \in [\beta,\gamma]$.

Let $S = \{0, v_1,\dots,v_d\} \subseteq C^2_{0,0}$ and
\[
A = [\beta,\gamma] \times \psi\big([\psi^{-1}(\gamma) - V - \varepsilon,\ \psi^{-1}(\beta) + V + \varepsilon]\big)^{S\setminus\{0\}} .
\]
Define the weight function
\[
f\big(\{\xi_s\}_{s\in S}, x\big) = \frac{1}{r!\,s!}\,\frac{2}{\omega_{s+1}}\, 1_A\big(\{\xi_s\}_{s\in S}\big) \prod_{k=1}^{r}\langle x, v_{i_k}\rangle \prod_{l=r+1}^{r+s}\Big(\psi^{-1}(\xi_{v_{i_l}}) - \psi^{-1}(\xi_0)\Big) . \tag{15}
\]
This requires that $\psi$ is known or, equivalently, the blurring of a halfspace. Note that for $a$ sufficiently small, $1_A\big(\{\theta^a_X(z + as)\}_{s\in S}\big)$ reduces to the condition $\theta^a_X(z) \in [\beta,\gamma]$. Applying Theorem 7 to the local estimator with weight function given by Eq. 15 yields:

Corollary 11. Let $X$ be a gentle set and suppose $\rho$ satisfies Conditions (i)–(iii). If $f$ is as in Eq. 15, then
\[
\lim_{a\to 0}\mathbb{E}\hat V^f_{d-1}(X) = \frac{2\big(\psi^{-1}(\beta) - \psi^{-1}(\gamma)\big)}{r!\,s!\,\omega_{s+1}}\int_{\partial X} \prod_{k=1}^{r}\langle x, v_{i_k}\rangle \prod_{l=r+1}^{r+s}\langle u(x), v_{i_l}\rangle\,\mathcal{H}^{d-1}(\mathrm{d}x) .
\]

Since $\psi^{-1}$ is strictly decreasing, $\psi^{-1}(\beta) - \psi^{-1}(\gamma) > 0$. Dividing by this factor thus yields an asymptotically unbiased estimator for Eq. 13.

For $r$-regular sets, a formula for the first order bias is given by Theorem 10. Using $3\times\dots\times 3$ configurations instead, we can make the first order bias vanish. Let $S = \{0, \pm v_1,\dots,\pm v_d\} \subseteq C^3_{v,0}$, where $v = v_1 + \dots + v_d$. Consider the weight function
\[
f\big(\{\xi_s\}_{s\in S}, x\big) = \frac{1}{r!\,s!}\,\frac{2}{\omega_{s+1}}\, 1_A\big(\{\xi_s\}_{s\in S}\big) \prod_{k=1}^{r}\langle x, v_{i_k}\rangle
\cdot\frac12\Big(\prod_{l=r+1}^{r+s}\big(\psi^{-1}(\xi_{v_{i_l}}) - \psi^{-1}(\xi_0)\big) + \prod_{l=r+1}^{r+s}\big(\psi^{-1}(\xi_0) - \psi^{-1}(\xi_{-v_{i_l}})\big)\Big) , \tag{16}
\]
where
\[
A = [\beta, 1-\beta] \times \psi\big([\psi^{-1}(1-\beta) - V - \varepsilon,\ \psi^{-1}(\beta) + V + \varepsilon]\big)^{S\setminus\{0\}} .
\]
Then Theorem 10 yields:

Corollary 12. Let $X$ be an $r$-regular set and suppose $\rho$ satisfies Conditions (i)–(iii). If $f$ is as in Eq. 16, then
\[
\mathbb{E}\hat V^f_{d-1}(X) = \frac{2\big(\psi^{-1}(\beta) - \psi^{-1}(1-\beta)\big)}{r!\,s!\,\omega_{s+1}}\int_{\partial X} \prod_{k=1}^{r}\langle x, v_{i_k}\rangle \prod_{l=r+1}^{r+s}\langle u(x), v_{i_l}\rangle\,\mathcal{H}^{d-1}(\mathrm{d}x) + o(a) .
\]

Remark 13. More generally, $u$ is determined by its coordinates in the basis $v_1,\dots,v_d$ given by Eq. 14. This can be used in a similar way to find estimators for integrals of the form
\[
\int_{\partial X} f\big(x, u(x)\big)\,\mathcal{H}^{d-1}(\mathrm{d}x) .
\]

Remark 14. Since $\operatorname{Tr}\Phi_{d-1}^{0,2}(X)$ is just the surface area of $X$ up to a constant factor, the above also yields a new surface area estimator. Taking larger configurations into account than the surface area estimators in Svane (2014b), one may hope for a better precision. On the other hand, this new estimator requires more knowledge about the underlying PSF and is hence harder to apply in practice.

Remark 15. It is known that asymptotically unbiased local surface area estimators based on black-and-white images do not exist (Svane, 2014a). Tensors of the form $\Phi_{d-1}^{r,1}$ can be estimated, but in general, asymptotically unbiased local estimators for $\Phi_{d-1}^{r,s}$ based on black-and-white images are not expected to exist for $s > 1$.

MEAN CURVATURE TENSORS

We can obtain estimators for tensors of the form $\Phi_{d-2}^{r,0}$ in a similar way. Again, the position dependence is obtained simply by multiplying the weights by $x^r$. The remaining mean curvature tensors involving surface normals seem harder to get hold of, since the dependence on the surface normals in the asymptotic mean is more involved.

Let $\beta \in [0,\tfrac12)$ and let $g : [\beta, 1-\beta] \to \mathbb{R}$ be a $C^1$ function satisfying $g(x) = -g(1-x)$. Define
\[
f(\xi_0, x) = g(\xi_0)\, x^r . \tag{17}
\]
This defines a local estimator $\hat V^f_{d-2}$. Theorems 7 and 10 yield:

Corollary 16. Suppose $X$ is a compact $r$-regular set and $\rho$ is continuous with compact support and satisfies Conditions (i)–(ii). With $f$ as in Eq. 17,
\[
\lim_{a\to 0}\mathbb{E}\hat V^f_{d-2}(X) = 2\pi\, r!\,(c_1 + c_2 + c_3)\,\Phi_{d-2}^{r,0}(X)
+ r!\int_{-\psi^{-1}(\beta)}^{\psi^{-1}(\beta)} t\,g(\psi(t))\,\mathrm{d}t\ Q\,\Phi_d^{r-2,0}(X) ,
\]
where the constants $c_1, c_2, c_3 \in \mathbb{R}$ are as in Svane (2014b), Section 6.2.

This follows by rewriting the limit in Theorem 10 exactly as in Svane (2014b). The $Q\Phi_d^{r-2,0}(X)$-term comes from the $\nabla_2$-term by an application of the divergence theorem. We have already shown how to find asymptotically unbiased estimators for the volume tensors, so this term can be corrected for. Estimators for which $c_1 + c_2 + c_3 \neq 0$ are suggested in Svane (2014b), Section 6.2. For instance, this is satisfied for both $g(\xi) = (\xi - \tfrac12)\,1_{[\beta,1-\beta]}(\xi)$ and $g(\xi) = 1_{[\beta,\frac12]}(\xi) - 1_{[\frac12,1-\beta]}(\xi)$ for suitable values of $\beta$.
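A minimal sketch of the raw local sum behind Eq. 17 in 2D, using $g(\xi) = (\xi - \tfrac12)1_{[\beta,1-\beta]}(\xi)$. By Corollary 16, its asymptotic mean is a linear combination of $\Phi_{d-2}^{r,0}(X)$ and $Q\Phi_d^{r-2,0}(X)$; the constant $c_1 + c_2 + c_3$ and the volume-tensor correction still have to be applied, as is done explicitly for the Gaussian PSF in the next section. Function and variable names are illustrative.

```python
import numpy as np

def mean_curvature_raw_sum(theta, a, beta=0.1):
    """Raw local sum a^{d-2} * sum_z g(theta(z)) * z^r of Eq. 17 for d = 2, r = 2.

    Uses g(xi) = (xi - 1/2) on [beta, 1 - beta] and zero outside.  The result
    still has to be combined with a volume-tensor estimate and divided by the
    constant from Corollary 16 to estimate Phi_0^{2,0}(X).
    """
    mask = (theta >= beta) & (theta <= 1 - beta)
    z = a * np.argwhere(mask).astype(float)         # coordinates a*z of contributing pixels
    g = theta[mask] - 0.5                           # g evaluated at the grey values
    # r = 2: weighted sum of outer products z z^T; the prefactor a^{d-2} is 1 for d = 2
    return (z * g[:, None]).T @ z
```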
RANK TWO TENSORS IN 2D

As an example of how to use the formulas of this paper, we consider the estimation of the rank 2 Minkowski tensors in $\mathbb{R}^2$. These are at the moment the Minkowski tensors of positive rank that are most commonly used in shape analysis; see, e.g., Schröder-Turk et al. (2008). By the McMullen relations, the essential ones are
\[
\Phi_k^{0,0}(X)\,Q,\ k = 0,1,2, \qquad \Phi_k^{2,0}(X),\ k = 0,1,2, \qquad \Phi_1^{0,2}(X) .
\]
All other rank 2 tensors are linear combinations of these.

The first three reduce to the estimation of the intrinsic volumes $\Phi_k^{0,0}(X) = V_k(X)$, see Schneider (1993), which is already considered in Svane (2014b). Of the remaining four, the estimation of $\Phi_1^{0,2}$ is the most involved, since it requires an estimate of the normal directions.

We assume that the unknown object $X$ is observed on the standard lattice $\mathbb{Z}^2$, which is the case in most applications, and we assume the resolution to be $a^{-1}$. Moreover, we take a common model for the PSF, namely the Gaussian $\rho(x) = (2\pi)^{-\frac d2}e^{-\frac12|x|^2}$. As noted in Example 1, $\psi_u(t) = \Phi(-t)$, and this function clearly satisfies (i)–(iii). In particular, we have $\psi^{-1} = -\Phi^{-1}$. Since $\psi^{-1}((0,1)) = \mathbb{R}$, we have a free choice of $\beta$ and $\gamma$ in (iii). In this example, we shall take $\beta = 1 - \gamma = 0.1$. This yields $\Phi^{-1}(0.9) = -\Phi^{-1}(0.1) \approx 1.28$.

Assume that we want to estimate the tensor $\Phi_1^{0,2}(X)$. For this we need to estimate each coordinate in the standard basis $e_1 = (1,0)$, $e_2 = (0,1)$. These are given for $i \in \{1,2\}$ by
\[
\Phi_1^{0,2}(X)_{ii} = \Phi_1^{0,2}(X)(e_i, e_i) = \frac{1}{4\pi}\int_{\partial X} \langle u(x), e_i\rangle^2\,\mathcal{H}^1(\mathrm{d}x) ,
\]
and for $i \neq j$ by
\[
\Phi_1^{0,2}(X)_{ij} = \Phi_1^{0,2}(X)(e_1, e_2) = \frac{1}{4\pi}\int_{\partial X} \langle u(x), e_1\rangle\langle u(x), e_2\rangle\,\mathcal{H}^1(\mathrm{d}x) .
\]

To estimate $\Phi_1^{0,2}(X)_{ij}$, we first measure the grey-value $\xi_{(m,n)}$ at each pixel indexed by $a(m,n) \in a\mathbb{Z}^2$. To compute the contribution to the estimate from the pixel $a(m,n) \in a\mathbb{Z}^2$, consider the $3\times 3$ pixel square centred at this point and take the grey-values in the pixels directly above, below, and next to it, i.e., $\xi_{(m,n+1)}$, $\xi_{(m,n-1)}$, $\xi_{(m-1,n)}$, and $\xi_{(m+1,n)}$, as shown in Fig. 2.

Fig. 2. A $3\times 3$ pixel square centred at $a(m,n)$ and the measured grey-values $\xi_{(m,n+1)}$, $\xi_{(m-1,n)}$, $\xi_{(m,n)}$, $\xi_{(m+1,n)}$, $\xi_{(m,n-1)}$.

Apply the weight function $f_{ij}$ to these values, where
\[
f_{11}\big(\xi_{(m,n)}, \xi_{(m+1,n)}, \xi_{(m,n+1)}, \xi_{(m-1,n)}, \xi_{(m,n-1)}\big)
= \frac{1_{[0.1,0.9]}(\xi_{(m,n)})}{4\pi\big(\Phi^{-1}(0.9) - \Phi^{-1}(0.1)\big)} \sum_{l=\pm 1}\Big(\Phi^{-1}(\xi_{(m+l,n)}) - \Phi^{-1}(\xi_{(m,n)})\Big)^2 ,
\]
\[
f_{22}\big(\xi_{(m,n)}, \xi_{(m+1,n)}, \xi_{(m,n+1)}, \xi_{(m-1,n)}, \xi_{(m,n-1)}\big)
= \frac{1_{[0.1,0.9]}(\xi_{(m,n)})}{4\pi\big(\Phi^{-1}(0.9) - \Phi^{-1}(0.1)\big)} \sum_{l=\pm 1}\Big(\Phi^{-1}(\xi_{(m,n+l)}) - \Phi^{-1}(\xi_{(m,n)})\Big)^2 ,
\]
\[
f_{12}\big(\xi_{(m,n)}, \xi_{(m+1,n)}, \xi_{(m,n+1)}, \xi_{(m-1,n)}, \xi_{(m,n-1)}\big)
= \frac{1_{[0.1,0.9]}(\xi_{(m,n)})}{4\pi\big(\Phi^{-1}(0.9) - \Phi^{-1}(0.1)\big)} \sum_{l=\pm 1}\Big(\Phi^{-1}(\xi_{(m+l,n)}) - \Phi^{-1}(\xi_{(m,n)})\Big)\Big(\Phi^{-1}(\xi_{(m,n+l)}) - \Phi^{-1}(\xi_{(m,n)})\Big)
\]
are the weight functions obtained from Eq. 16. The estimator for $\Phi_1^{0,2}(X)_{ij}$ is then given by summing the contributions from all pixels:
\[
\hat\Phi_1^{0,2}(X)_{ij} = a \sum_{(m,n)\in\mathbb{Z}^2} f_{ij}\big(\xi_{(m,n)}, \xi_{(m+1,n)}, \xi_{(m,n+1)}, \xi_{(m-1,n)}, \xi_{(m,n-1)}\big) .
\]

Since the remaining tensors $\Phi_2^{2,0}$, $\Phi_1^{2,0}$, and $\Phi_0^{2,0}$ do not depend on the surface normals, only $1\times 1$ configurations are necessary in the estimators. The respective (tensor-valued) weight functions are
\[
f_2^{2,0}\big(\xi_{(m,n)}, a(m,n)\big) = \frac{a^2}{2}\,(m,n)^2\, 1_{[0.5,1]}(\xi_{(m,n)}) ,
\]
\[
f_1^{2,0}\big(\xi_{(m,n)}, a(m,n)\big) = \frac{a^2}{2\big(\Phi^{-1}(0.9) - \Phi^{-1}(0.1)\big)}\,(m,n)^2\, 1_{[0.1,0.9]}(\xi_{(m,n)}) ,
\]
\[
f_0^{2,0}\big(\xi_{(m,n)}, a(m,n)\big) = \frac{a^2}{2\pi}\Big( c^{-1}\big(\xi_{(m,n)} - \tfrac12\big)\,(m,n)^2\, 1_{[0.1,0.9]}(\xi_{(m,n)}) - Q\, 1_{[0.5,1]}(\xi_{(m,n)}) \Big) .
\]
Here the constant $c$ is given by
\[
c = 2\int_{\Phi^{-1}(0.1)}^{-\Phi^{-1}(0.1)} t\,\Phi(-t)\,\mathrm{d}t \approx -0.961 .
\]
The reason for this scaling is that, for our PSF, the constant $c$ equals $2(c_1 + c_2 + c_3)$ in the notation of Svane (2014b). The tensor $(m,n)^2$ may also be interpreted as the $2\times 2$ matrix $(m,n)^T(m,n)$. The estimator is again given by summing these weights over all lattice points:
\[
\hat\Phi_k^{2,0}(X) = a^k \sum_{a(m,n)\in a\mathbb{Z}^2} f_k^{2,0}\big(\xi_{(m,n)}, a(m,n)\big) .
\]
Note that while the estimators with $k = 1,2$ converge to the true value, Corollary 16 does not show convergence for $k = 0$, since the Gaussian does not have compact support.
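The weights $f_{11}$, $f_{22}$, $f_{12}$ and the summation above translate directly into code. The sketch below assumes the grey values are stored in an array xi with xi[m, n] being the value at lattice point $(m,n)$ (an indexing convention, not prescribed by the paper); pixels on the image boundary are skipped, and $\Phi^{-1}$ is clipped to avoid infinities at grey values 0 and 1, which the central-pixel gate excludes anyway.

```python
import numpy as np
from scipy.stats import norm

def estimate_phi_1_02(xi, a, beta=0.1):
    """Estimate the 2x2 coordinate matrix of Phi_1^{0,2}(X) from grey values xi.

    Direct transcription of the weights f_11, f_22, f_12: a central pixel with
    grey value in [beta, 1-beta] contributes squared and mixed differences of
    Phi^{-1}-transformed neighbouring grey values; the total is multiplied by a.
    """
    c_norm = 4.0 * np.pi * (norm.ppf(1 - beta) - norm.ppf(beta))
    T = norm.ppf(np.clip(xi, 1e-12, 1 - 1e-12))     # Phi^{-1}(xi)
    out = np.zeros((2, 2))
    M, N = xi.shape
    for m in range(1, M - 1):
        for n in range(1, N - 1):
            if not (beta <= xi[m, n] <= 1 - beta):
                continue
            dx = [T[m + l, n] - T[m, n] for l in (-1, 1)]   # differences in the e_1 direction
            dy = [T[m, n + l] - T[m, n] for l in (-1, 1)]   # differences in the e_2 direction
            out[0, 0] += dx[0] ** 2 + dx[1] ** 2
            out[1, 1] += dy[0] ** 2 + dy[1] ** 2
            mixed = dx[0] * dy[0] + dx[1] * dy[1]
            out[0, 1] += mixed
            out[1, 0] += mixed
    return a * out / c_norm
```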
DISCUSSION

The paper shows that, contrary to black-and-white images, digital grey-scale images contain enough information to extract some of the surface and mean curvature tensors based only on local information. This extends the known results for Minkowski functionals from Svane (2014b). The new finding is that position dependent surface integrals can also be estimated by allowing the weight function to depend on position. Moreover, by considering $2\times\dots\times 2$ configurations, it is possible to locally determine the directions of surface normals.

However, the new estimators are no longer simple. While the estimators for surface area could be based on an image thresholded at two different levels, detailed information about the PSF is needed for the surface tensors.

The results of this paper are only asymptotic, based on a limiting procedure in which the lattice distance and the PSF are shrunk at the same rate. Whether or not this is realistic will depend on the way the blurring arises; see the discussion in Svane (2014b). For intrinsic volumes, only the convergence of the PSF was essential, but when larger configurations are taken into account, an assumption on the convergence of the resolution is necessary.

It still remains to investigate how the algorithms work out in finite resolution. For this, simulation studies could provide an idea about their performance. To apply the algorithms in practice would require a set-up that matches the assumptions. In particular, one needs to know the PSF, or at least what a blurred halfspace looks like. The latter might be tested experimentally or by fitting a PSF. Moreover, the PSF has to be rotation invariant and the resolution needs to be high compared to the blurring, cf. Condition (iii).

It is still unknown whether the lower Minkowski tensors can be estimated in a similar way, even in the basic case of Minkowski functionals.

ACKNOWLEDGEMENTS

The author was funded by a grant from the Carlsberg Foundation and hosted by the Institute of Stochastics at Karlsruhe Institute of Technology. The author would also like to thank Markus Kiderlen for helpful input and suggestions.

REFERENCES

Airy GB (1835). On the diffraction of an object-glass with circular aperture. Trans Cambridge Philos Soc 5:283–91.
Federer H (1959). Curvature measures. Trans Amer Math Soc 93:418–91.
Hug D, Last G, Weil W (2004). A local Steiner-type formula for general closed sets and applications. Math Z 246:237–72.
Hug D, Schneider R, Schuster R (2008). Integral geometry of tensor valuations. Adv Appl Math 41:482–509.
Kampf J (2012). A limitation of the estimation of intrinsic volumes via pixel configuration counts. WiMa Report 144.
Kiderlen M, Rataj J (2007). On infinitesimal increase of volumes of morphological transforms. Mathematika 53:103–27.
Klette R, Rosenfeld A (2004). Digital geometry. San Francisco: Elsevier.
Köthe U (2008). What can we learn from discrete images about the continuous world? In: Coeurjolly D, Sivignon I, Tougne L, Dupont F, eds. Discrete geometry for computer imagery, Proc DGCI 2008. Berlin: Springer. Lect Not Comput Sci 4992:4–19.
Matheron G (1975). Random sets and integral geometry. Toronto: John Wiley & Sons.
McMullen P (1997). Isometry covariant valuations on convex bodies. Rend Circ Mat Palermo (2) Suppl 50:259–71.
Ohser J, Mücklich F (2000). Statistical analysis of microstructures. Chichester: John Wiley & Sons.
Schneider R (1993). Convex bodies: The Brunn–Minkowski theory. Cambridge: Cambridge University Press.
Schneider R, Schuster R (2002). Tensor valuations on convex bodies and integral geometry, II. Rend Circ Mat Palermo (2) Suppl 70:295–314.
Schröder-Turk GE, Kapfer SC, Breidenbach B, Beisbart C, Mecke K (2008). Tensorial Minkowski functionals and anisotropy measures for planar patterns. J Microsc 238:57–74.
Schröder-Turk GE, Mickel W, Kapfer SC, Schaller FM, Breidenbach B, Hug D et al. (2010a). Minkowski tensors of anisotropic spatial structure. New J Phys 15:083028.
Schröder-Turk GE, Mickel W, Schröter M, Delaney GW, Saadatfar M, Senden TJ et al. (2010b). Disordered spherical bead packs are anisotropic. Europhys Lett 90:34001.
Schröder-Turk GE, Mickel W, Kapfer SC, Klatt MA, Schaller FM, Hoffmann MJ et al. (2011). Minkowski tensor shape analysis of cellular, granular and porous structures. Adv Mater 23:2535–53.
Svane AM (2014a). On multigrid convergence of local algorithms for intrinsic volumes. J Math Imaging Vis 49:352–76.
Svane AM (2014b). Estimation of intrinsic volumes from digital grey-scale images. J Math Imaging Vis 49:148–72.