Editorial Boards

Informatica is a journal primarily covering intelligent systems in the European computer science, informatics and cognitive community; scientific and educational as well as technical, commercial and industrial. Its basic aim is to enhance communications between different European structures on the basis of equal rights and international refereeing. It publishes scientific papers accepted by at least two referees outside the author's country. In addition, it contains information about conferences, opinions, critical examinations of existing publications and news. Finally, major practical achievements and innovations in the computer and information industry are presented through commercial publications as well as through independent evaluations.

Editing and refereeing are distributed. Each editor from the Editorial Board can conduct the refereeing process by appointing two new referees or referees from the Board of Referees or Editorial Board. Referees should not be from the author's country. If new referees are appointed, their names will appear in the list of referees. Each paper bears the name of the editor who appointed the referees. Each editor can propose new members for the Editorial Board or referees. Editors and referees inactive for a longer period can be automatically replaced. Changes in the Editorial Board are confirmed by the Executive Editors.

The necessary coordination is made through the Executive Editors, who examine the reviews, sort the accepted articles and maintain appropriate international distribution. The Executive Board is appointed by the Society Informatika. Informatica is partially supported by the Slovenian Ministry of Higher Education, Science and Technology.

Each author is guaranteed to receive the reviews of his article. When accepted, publication in Informatica is guaranteed in less than one year after the Executive Editors receive the corrected version of the article.

Executive Editor – Editor in Chief
Matjaž Gams
Jamova 39, 1000 Ljubljana, Slovenia
Phone: +386 1 4773 900, Fax: +386 1 2519 385
matjaz.gams@ijs.si
http://dis.ijs.si/mezi/matjaz.html

Editor Emeritus
Anton P. Železnikar
Volaričeva 8, Ljubljana, Slovenia
s51em@lea.hamradio.si
http://lea.hamradio.si/~s51em/

Executive Associate Editor – Deputy Managing Editor
Mitja Luštrek, Jožef Stefan Institute
mitja.lustrek@ijs.si

Executive Associate Editor – Technical Editor
Drago Torkar, Jožef Stefan Institute
Jamova 39, 1000 Ljubljana, Slovenia
Phone: +386 1 4773 900, Fax: +386 1 2519 385
drago.torkar@ijs.si

Contact Associate Editors
Europe, Africa: Matjaž Gams
N. and S. America: Shahram Rahimi
Asia, Australia: Ling Feng
Overview papers: Maria Ganzha, Wiesław Pawłowski, Aleksander Denisiuk

Editorial Board
Juan Carlos Augusto (Argentina)
Vladimir Batagelj (Slovenia)
Francesco Bergadano (Italy)
Marco Botta (Italy)
Pavel Brazdil (Portugal)
Andrej Brodnik (Slovenia)
Ivan Bruha (Canada)
Wray Buntine (Finland)
Zhihua Cui (China)
Aleksander Denisiuk (Poland)
Hubert L. Dreyfus (USA)
Jozo Dujmović (USA)
Johann Eder (Austria)
George Eleftherakis (Greece)
Ling Feng (China)
Vladimir A. Fomichov (Russia)
Maria Ganzha (Poland)
Sumit Goyal (India)
Marjan Gušev (Macedonia)
N. Jaisankar (India)
Dariusz Jacek Jakóbczak (Poland)
Dimitris Kanellopoulos (Greece)
Samee Ullah Khan (USA)
Hiroaki Kitano (Japan)
Igor Kononenko (Slovenia)
Miroslav Kubat (USA)
Ante Lauc (Croatia)
Jadran Lenarčič (Slovenia)
Shiguo Lian (China)
Suzana Loskovska (Macedonia)
Ramon L. de Mantaras (Spain)
Natividad Martínez Madrid (Germany)
Sando Martinčić-Ipišić (Croatia)
Angelo Montanari (Italy)
Pavol Návrat (Slovakia)
Jerzy R. Nawrocki (Poland)
Nadia Nedjah (Brasil)
Franc Novak (Slovenia)
Marcin Paprzycki (USA/Poland)
Wiesław Pawłowski (Poland)
Ivana Podnar Žarko (Croatia)
Karl H. Pribram (USA)
Luc De Raedt (Belgium)
Shahram Rahimi (USA)
Dejan Raković (Serbia)
Jean Ramaekers (Belgium)
Wilhelm Rossak (Germany)
Ivan Rozman (Slovenia)
Sugata Sanyal (India)
Walter Schempp (Germany)
Johannes Schwinn (Germany)
Zhongzhi Shi (China)
Oliviero Stock (Italy)
Robert Trappl (Austria)
Terry Winograd (USA)
Stefan Wrobel (Germany)
Konrad Wrona (France)
Xindong Wu (USA)
Yudong Zhang (China)
Rushan Ziatdinov (Russia & Turkey)

A Review on CT and X-Ray Images Denoising Methods

Dang N. H. Thanh
Department of Information Technology, Hue College of Industry, Hue 530000, Vietnam
E-mail: dnhthanh@hueic.edu.vn

V. B. Surya Prasath
Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati OH 45229, USA
Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Cincinnati OH 45267, USA
Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati OH 45221, USA
E-mail: surya.prasath@cchmc.org, prasatsa@uc.edu

Le Minh Hieu
Department of Economics, University of Economics, The University of Danang, Danang 550000, Vietnam
E-mail: hieulm@due.udn.vn

Overview paper

Keywords: Poisson noise, medical imaging, image processing, medical image processing, denoising

Received: February 6, 2018

In medical imaging systems, denoising is one of the important image processing tasks. Automatic noise removal improves the quality of diagnosis and requires careful treatment of the obtained imagery. Computed tomography (CT) and X-ray imaging systems use X radiation to capture images, and the images are usually corrupted by noise following a Poisson distribution. Due to the importance of Poisson noise removal in medical imaging, many state-of-the-art methods have been studied in the image processing literature. These include methods based on total variation (TV) regularization, wavelets, principal component analysis, machine learning, etc. In this work, we review the following important Poisson removal methods: the method based on the modified TV model, the adaptive TV method, the adaptive non-local total variation method, the method based on the higher-order natural image prior model, the Poisson reducing bilateral filter, the PURE-LET method, and the variance stabilizing transform-based methods. We focus on a methodology overview, accuracy, execution time, and an assessment of the advantages and disadvantages of each method. The goal of this paper is to help in choosing a denoising method that suits CT and X-ray images. Integrating several high-quality denoising methods in image processing software for medical imaging systems is always an excellent option and supports further image analysis for computer-aided diagnosis.

Povzetek: Pregledni članek opisuje metode za čiščenje slike, narejene z rentgenom ali CT.

1 Introduction

Image denoising and noise removal with structure preservation is one of the important tasks integrated in medical diagnostic imaging systems, such as X-ray and computed tomography (CT) systems. X-ray and CT images are formed when the area under consideration of a patient is exposed to the X-ray beam and the resulting attenuation is captured [1].
The noise in these systems follows the Poisson distribution and is known as Poisson noise, shot noise, photon noise, Schott noise or quantum noise. Although Poisson noise does not depend on temperature or frequency, it depends on the photon count. Its strength grows with the pixel intensity: the Poisson noise at a higher-intensity pixel is greater than that at a lower-intensity pixel [2]. Nowadays, digitization is an important technique for improving image quality in medical imaging systems, and the characteristics of Poisson noise need to be considered to remove it effectively [1]. Because Poisson noise is a signal-dependent noise, applying the usual denoising methods designed for additive noise is ineffective; specific methods based on its characteristics need to be designed. Many approaches have been used to remove Poisson noise, including total variation, mathematical transforms (wavelets, etc.), Markov random fields, principal component analysis (PCA) and machine learning. This paper mainly focuses on non-learning-based methods; learning techniques appear only in the small part of this review related to the fields of experts image prior model.

The approach that has been widely studied in the past few years, with many achievements, is regularization by total variation. This approach is based on regularization techniques developed long ago. Rudin et al. [3] used total variation regularization to remove noise from digital images. Basically, they minimized the total variation of the image (the integral of the gradient magnitude) subject to a fixed constraint on the noise variance. The proposed model is also known as the ROF (Rudin–Osher–Fatemi) model. This work is well known and has been cited tens of thousands of times. However, the ROF model focuses on restoring images degraded by Gaussian noise. It is ineffective for Poisson noise: in the resulting image, edges are not well preserved, and if the regularization strength is decreased, noise in higher-intensity regions still remains.

To overcome these limitations of the ROF model, Triet et al. [2] proposed an improved version that can handle Poisson noise well. This model is known as the modified ROF model (MROF). However, both of the original methods based on ROF and MROF create artificial artifacts [1]. Artificial artifacts in digital images are misrepresentations introduced by image processing; they make some regions of an image look unnatural [4]. Artifacts come in many types, such as staircasing, star and halo artifacts. In medical imaging, these artifacts can be mistaken by doctors for actual pathology, and radiologists usually have to learn to recognize them to avoid this mistake. Therefore, such artificial regions should not be created during processing. Prasath [1] proposed an adaptive version of MROF to remove this effect, known as the adaptive total variation method (ATV).

A common problem of both the MROF and ATV methods is that they are ineffective on photon-limited images. To enhance the quality of this type of image in the denoising process, Salmon et al. [5] proposed the non-local PCA method. Thereafter, Liu et al. [6] proposed an adaptive non-local total variation method (ANLTV). This method preserves more of the structural information of the image and gives better denoising results on photon-limited images. Non-local approaches like ANLTV are state-of-the-art.
However, if local models are combined with a training process, the results are not inferior to those of other state-of-the-art non-local models. Wensen et al. [7] proposed a local variational model that incorporates the fields of experts image prior, which is widely used in image prior and regularization models. This model is known as the higher-order natural image prior model (HNIPM). The HNIPM can remove Poisson noise in both high-peak and low-peak images. Although the model is local, it is trained in the Anscombe transform domain (very effective for Poisson denoising), so it is competitive with other state-of-the-art Poisson denoising models.

However, the above methods are iterative, which requires more execution time to remove noise. Kirti et al. [8] proposed a spatial-domain filter obtained by modifying the bilateral filter framework to remove Poisson noise. The Poisson reducing bilateral filter (PRBF) is non-iterative in nature, so it can treat Poisson noise faster than iterative approaches.

Another promising approach relies on wavelets and their modifications. Thierry et al. proposed a denoising method based on image-domain minimization of the Poisson unbiased risk estimate: PURE-LET (Poisson Unbiased Risk Estimation – Linear Expansion of Thresholds) [9]. This method is performed in a transformed domain, the undecimated discrete wavelet transform, and can be extended to some other transforms. Zhang et al. [10] also proposed a multiscale variance stabilizing transform (MS-VST) that can be viewed as an extension of the Anscombe transform. This transform can be combined with wavelets, ridgelets and curvelets [10]. Both PURE-LET and MS-VST are competitive with many existing denoising methods. Among them, the VST-based methods are a new research trend for CT and X-ray image denoising [11] [12] [13] [14]: by using a VST, Poisson noise can be treated as additive Gaussian noise, so researchers can reuse the existing, well-developed Gaussian denoising methods and do not need to develop a dedicated method for Poisson noise.

Our paper is organized as follows. In Section 2, details of image formation in CT/X-ray imaging systems and the characteristics of Poisson noise are provided. In Section 3, the methodology of the Poisson denoising methods is covered briefly. Section 4 and Section 5 present the discussion of the accuracy, performance and advantages/disadvantages of the methods, and the conclusion.

2 Image formation in medical imaging systems and Poisson noise

In CT and X-ray imaging systems, to produce a radiographic image, X-ray photons must pass through tissue and interact with an image receptor. Image formation is the result of differential absorption of the X-ray beam as it interacts with the anatomic tissue [15]. Differential absorption is a process whereby some of the X-ray beam is absorbed in the tissue and some passes through the anatomic part. Varying anatomic parts do not absorb the primary beam to the same degree: parts composed of bone absorb more X-ray photons than parts filled with air. Differential absorption of the primary X-ray beam creates an image that structurally represents the anatomic area of interest.
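To make the interplay between differential absorption and photon counting concrete, the following is a minimal sketch (not taken from any of the reviewed papers). It assumes an idealized exponential attenuation model and a NumPy environment; the attenuation coefficients, object thickness and photon budget are illustrative values only.

```python
import numpy as np

def simulate_xray(mu_map, thickness_cm, photons_per_pixel=1e4, rng=None):
    """Idealized X-ray acquisition: exponential attenuation followed by photon counting.

    mu_map            -- linear attenuation coefficients (1/cm) of the object
    thickness_cm      -- object thickness traversed by each ray (cm)
    photons_per_pixel -- expected photon count for an unattenuated ray
    """
    rng = np.random.default_rng() if rng is None else rng
    # Differential absorption: denser tissue (larger mu) transmits fewer photons.
    expected = photons_per_pixel * np.exp(-mu_map * thickness_cm)
    # Photon counting is a Poisson process, so the recorded counts fluctuate
    # around the expected transmission with variance equal to the mean.
    counts = rng.poisson(expected)
    return expected, counts

# Toy phantom: a "bone" half (mu ~ 0.5 /cm) next to a "soft tissue" half (mu ~ 0.2 /cm).
mu = np.where(np.arange(256) < 128, 0.5, 0.2) * np.ones((256, 256))
expected, noisy = simulate_xray(mu, thickness_cm=5.0)
bone, tissue = noisy[:, :128], noisy[:, 128:]
# The noise standard deviation tracks sqrt(mean), i.e. it is signal-dependent.
print("bone:   std %.1f vs sqrt(mean) %.1f" % (bone.std(), np.sqrt(bone.mean())))
print("tissue: std %.1f vs sqrt(mean) %.1f" % (tissue.std(), np.sqrt(tissue.mean())))
```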
Figure 1: The Poisson noise generation: a) the expected noise-free image; b) the noisy image.

Poisson noise is a fundamental form of uncertainty associated with the measurement of light, inherent to the quantized nature of light and the independence of photon detections [16]. Its expected magnitude is signal-dependent, and it is the dominant source of image noise except in low-light conditions.

Image sensors measure scene irradiance by counting the number of discrete photons incident on the sensor over a given time interval. Because of the photoelectric effect, digital sensors convert photons into electrons, whereas film-based sensors rely on a photo-sensitive chemical reaction. The random arrival of individual photons then leads to Poisson noise.

Individual photon detections can be considered independent events that follow a random temporal distribution. Photon counting is therefore a Poisson process, and the number of photons k measured by a given sensor element over a time interval t is described by the discrete probability distribution

P(k) = \frac{(\lambda t)^{k} e^{-\lambda t}}{k!},

where λ is the expected number of photons per unit time interval, which is proportional to the incident scene irradiance.

Since Poisson noise is derived from the nature of the signal itself, it provides a lower bound on the uncertainty of measuring light. Any measurement is subject to Poisson noise, even under ideal conditions free of other noise sources. When Poisson noise is the only significant source of uncertainty, as commonly occurs in bright, photon-rich environments, imaging is called photon-limited [16].

By the Poisson distribution, reducing Poisson noise requires capturing more photons, which means longer exposure times or a higher X-ray beam intensity. However, the number of photons captured in a single shot is limited by the full well capacity of the sensor. Moreover, increasing the exposure time or the beam intensity would be harmful to the health of patients. Because of this technological limitation, it is necessary to reduce Poisson noise by image processing algorithms. Figure 1 simulates the generation of Poisson noise on an image: we used the built-in imnoise function of MATLAB to generate Poisson noise on a skull image [17]. The Poisson noise in the higher-intensity regions is greater than that in the lower-intensity regions.

3 Denoising methods on CT and X-ray images

3.1 The modified ROF model

Suppose that f is a given grayscale image on Ω (a bounded open subset of R², i.e., Ω ⊂ R²), u is the expected denoised image that closely matches the observed image, and x = (x₁, x₂) ∈ Ω denotes a pixel. Using total variation regularization, Triet et al. converted the Poisson denoising problem into the following minimization problem:

u = \arg\min_{u} \left( \int_\Omega \big( u - f \log(u) \big)\,dx + \lambda \int_\Omega |\nabla u|\,dx \right), \quad (1)

where λ > 0 is the regularization parameter. To solve this problem, Triet et al. used the gradient descent method and replaced the regularization parameter by a function suited to processing noise in image regions of both low and high intensity. This treatment exactly suits the signal-dependent nature of Poisson noise.
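As a rough illustration of how a model of this type can be minimized, the sketch below applies plain explicit gradient descent to a discretized version of (1). It is only a simplified stand-in for the scheme of [2]: the regularization parameter is kept fixed rather than spatially adapted, a small eps smooths |∇u|, and simple forward/backward differences are used.

```python
import numpy as np

def grad(u):
    # Forward differences with replicated boundary.
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return ux, uy

def div(px, py):
    # Backward differences (discrete adjoint of the forward-difference gradient).
    dx = np.diff(px, axis=1, prepend=px[:, :1])
    dy = np.diff(py, axis=0, prepend=py[:1, :])
    return dx + dy

def mrof_gradient_descent(f, lam=0.1, dt=0.05, iters=200, eps=1e-3):
    """Minimize  sum(u - f*log u) + lam*sum|grad u|  by explicit gradient descent."""
    u = np.maximum(f.astype(float), 1.0)           # keep u > 0 so log and f/u are defined
    for _ in range(iters):
        ux, uy = grad(u)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)      # smoothed |grad u|
        curvature = div(ux / mag, uy / mag)        # div(grad u / |grad u|)
        # Energy gradient: (1 - f/u) from the data term, -lam*curvature from the TV term.
        u = u - dt * ((1.0 - f / u) - lam * curvature)
        u = np.maximum(u, 1e-3)                    # stay in the positive domain
    return u
```

Applied to a Poisson-noisy image (for example one generated as in the sketch of Section 2), this loop reproduces the qualitative behaviour described above; the spatially adaptive choice of the parameter is what distinguishes the actual MROF scheme.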
3.2 The adaptive total variation method

The adaptive total variation method is similar to the above method; however, the second term in (1) is replaced by an adaptive total variation:

u = \arg\min_{u} \left( \int_\Omega \big( u - f \log(u) \big)\,dx + \int_\Omega \omega(x)\,|\nabla u|\,dx \right), \quad (2)

with the weight

\omega(x) = \frac{1}{1 + k\,|G_\sigma * \nabla u|},

where G_σ is the Gaussian kernel for smoothing with variance σ, k > 0 is a contrast parameter, and * denotes convolution.

In order to avoid staircasing artifacts, Prasath [1] proposed a generalized inverse gradient term that incorporates local statistics computed from patches extracted from the image. The details of this term are presented below. Let Ω_{x,r} be the local region centered at x with radius r. Consider the local histogram of a pixel x and its corresponding cumulative distribution function [18]:

H_x(y) = \frac{|\{ z \in \Omega_{x,r} \cap \Omega : f(z) = y \}|}{|\Omega_{x,r} \cap \Omega|}, \qquad
C_x(y) = \frac{|\{ z \in \Omega_{x,r} \cap \Omega : f(z) \le y \}|}{|\Omega_{x,r} \cap \Omega|},

where 0 ≤ y ≤ L, L is the maximum possible pixel value of the image, and |·| denotes the number of elements of a set (cardinality). The local histogram quantity used to quantify local regions of the given image is

\eta(x) = \int_0^{L} C_x(y)\,dy.

Finally, the adaptive weight in (2) is defined as

\omega(x) = \frac{1}{1 + k\,\big( |G_\sigma * \nabla u(x)| / \eta(x) \big)^{2}}.

The alternating direction method of multipliers [1] is used to solve problem (2). This iterative scheme also gives good performance.

3.3 The adaptive non-local total variation method

In the case of a photon-limited image, a lot of useful structural information of the original image has been lost, so the corrupted image is close to a binary image. If we only apply denoising methods such as the modified ROF model or the adaptive total variation, the denoising result is not really effective. For this type of image, we first need to enhance the image (improve brightness, contrast, etc.) and only then perform the denoising process.

The method proposed by Liu et al. [6] follows this idea. In the first step, they enhance the image detail by using Euler's elastica. In the second step, they remove noise by using non-local total variation, which aims to preserve the structural information. The Euler's elastica-based noisy image enhancement model is

u = \arg\min_{u} \left( \int_\Omega \big( u - f \log(u) \big)\,dx + \lambda \int_\Omega \Big( a + b \big( \nabla \cdot \tfrac{\nabla u}{|\nabla u|} \big)^{2} \Big)\,|\nabla u|\,dx \right), \quad (3)

where λ > 0 is a regularization parameter and a > 0, b > 0 are weight parameters. The Poisson denoising model based on non-local total variation is then

u = \arg\min_{u} \left( \int_\Omega (u - f)^{2}\,dx + \lambda \int_\Omega |\nabla_{NL} u|\,dx \right), \quad (4)

where

\int_\Omega |\nabla_{NL} u|\,dx = \int_\Omega \sqrt{ \int_\Omega \big( u(y) - u(x) \big)^{2} w(x,y)\,dy }\;dx

is the non-local total variation and w(x, y) is the non-local weight measuring the similarity of patches centered at the pixels x and y. The denoised image is finally restored from (4) by using the inverse Anscombe transform

u = \Big( \frac{v}{2} \Big)^{2} - \frac{3}{8}.

The alternating direction method of multipliers is also recommended to solve models (3) and (4).
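The non-local weight w(x, y) is not specified in detail above. A common choice, used for example in non-local means, compares square patches around the two pixels; the sketch below computes such weights for a single reference pixel. The patch size, search window and filtering parameter h are illustrative and are not the values used in [6].

```python
import numpy as np

def nonlocal_weights(img, x, y, patch=3, search=7, h=10.0):
    """Patch-similarity weights w((x, y), (i, j)) for one reference pixel.

    Pixels whose surrounding patch resembles the patch around (x, y)
    receive weights close to 1, dissimilar ones close to 0.
    """
    r, s = patch // 2, search // 2
    padded = np.pad(img.astype(float), r, mode="reflect")
    ref = padded[x:x + patch, y:y + patch]           # patch centered at (x, y)
    weights = {}
    for i in range(max(0, x - s), min(img.shape[0], x + s + 1)):
        for j in range(max(0, y - s), min(img.shape[1], y + s + 1)):
            cand = padded[i:i + patch, j:j + patch]
            dist2 = np.sum((ref - cand) ** 2)        # squared patch distance
            weights[(i, j)] = np.exp(-dist2 / h**2)  # Gaussian similarity kernel
    return weights
```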
3.4 The higher-order natural image prior model

The denoising method based on the higher-order natural image prior model builds on the fields of experts image prior model, which can be written as

\arg\min_{u} \sum_{i=1}^{N_f} \alpha_i \sum_{p=1}^{N} \phi\big( (k_i * u)_p \big) + D(u, f), \quad (5)

where N_f is the number of filters, k_i are the learned linear filters with corresponding weights α_i > 0, N is the number of image pixels, φ(z) = ln(1 + z²) is the potential function, and (k_i * u)_p denotes the convolution evaluated at pixel p. The first term is derived from the fields of experts image prior model; the second term D(u, f) is a data fidelity term that can take various forms.

Using model (5), Wensen et al. [7] proposed two models trained in different transform domains: the first is trained in the original image domain with a data term derived from the Poisson noise statistics, and the second is trained in the Anscombe transform domain with a quadratic data term. The first model removes Poisson noise effectively in high-peak images but fails for low-peak images. The reason is that a low-peak image contains large regions with many zero-intensity pixels; those pixels cannot be updated and stay fixed at 0 throughout the iterations, so noise still remains. The second model removes noise well for low-peak images, but its quadratic data term is only effective for Gaussian noise. Wensen et al. therefore combined the advantages of the two models into a novel model by replacing the quadratic data term of the second model with the Poisson noise statistics data term of the first model. The resulting model proved effective at removing Poisson noise in both high-peak and low-peak images. The iPiano algorithm [19], an efficient algorithm for non-convex optimization problems, is recommended to solve the resulting model.

3.5 The Poisson reducing bilateral filter

The bilateral filter was proposed by Tomasi and Manduchi [20] to reduce additive Gaussian noise. The filter is built on geometric and photometric distances within a local window. Kirti et al. [8] modified this filter by replacing the geometric distance with a Poisson distribution; the mean value is taken as the mean image intensity in a local window, and for every mean value in the local window the expected value is estimated by maximum likelihood estimation. Since the Poisson reducing bilateral filter is non-iterative in nature, its performance primarily depends on the maximum likelihood estimation step.
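For reference, the sketch below implements the classical bilateral filter of Tomasi and Manduchi [20], on which PRBF builds; the window radius and the spatial/range parameters are illustrative. The Poisson-specific weighting and the maximum likelihood mean estimation of [8] are not reproduced here.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=20.0):
    """Classical bilateral filter: each output pixel is a weighted average of its
    neighbours, weighted by spatial closeness and intensity (photometric) similarity."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    # Precompute the spatial (geometric) kernel over the window.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Photometric kernel: penalize intensity differences to the centre pixel.
            range_w = np.exp(-((window - img[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

PRBF keeps this single-pass structure but replaces the geometric term with a Poisson-based weight, which is why its running time is dominated by the maximum likelihood estimation mentioned above.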
3.6 The PURE-LET method

The PURE-LET (Poisson Unbiased Risk Estimation – Linear Expansion of Thresholds) method [9] extends the SURE-LET method [21] to the reduction of Poisson noise. Basically, the method is built on minimizing a Poisson unbiased risk estimate using a linear expansion of thresholds (LET). Luisier et al. [9] proposed PURE-LET to reduce Poisson noise without any a priori hypothesis on the noise-free image.

The main goal of the method is to minimize the mean squared error between the noise-free image and the denoised image. However, since the noise-free image is unknown, an unbiased risk estimate, known as the Poisson unbiased risk estimate, is used instead. This estimate is formulated in the unnormalized Haar discrete wavelet transform domain, and an unknown image function replaces the noise-free image. To minimize the estimate, this unknown image function is expressed as a linear expansion of thresholds. Once the elementary denoising functions are given, the minimization reduces to finding the weight parameters of the linear expansion. Hence, the main task of the PURE-LET method is to solve a linear system of equations in which the variables are the weight parameters of the linear expansion. The linear expansion of thresholds can be formulated in several transformed domains, such as the unnormalized wavelet transform, the Anscombe transform and the Haar–Fisz transform.

Another important task in the PURE-LET method is choosing the set of elementary denoising (thresholding) functions. These functions need to satisfy the following minimal properties: differentiability, anti-symmetry, and linear behavior for large coefficients. The PURE-LET method is competitive with other state-of-the-art Poisson denoising methods. It is also easy to extend to other types of noise, such as Gaussian noise [22] [23] [24] [25] and mixed noise [26] [27] [28] [29] [30]. Its performance depends strongly on the performance of the method used to solve the linear system of equations, for example, the Gauss–Seidel method.

3.7 The multiscale variance stabilizing transform method

The multiscale variance stabilizing transform method was proposed by Zhang et al. [10] to reduce Poisson noise in photon-limited images. The method is based on a variance stabilizing transform (VST) incorporated within the multiscale framework offered by the undecimated wavelet transform (UWT), which is used because of its translation-invariant denoising. The denoising task comes down to finding the coefficients of the multiscale variance stabilizing transform; with these coefficients, the noise-free image can be estimated.

The denoising method involves the following steps: transformation – computation of the UWT in conjunction with the MS-VST; detection – detection of significant detail coefficients by hypothesis tests; estimation – reconstruction of the final estimate using the detected coefficients. Since signal reconstruction requires inverting the MS-VST-combined UWT, the reconstruction is formulated as a convex sparsity-promoting optimization problem, which can be solved by many iterative methods, such as the iterative hybrid steepest descent method.

The MS-VST can be combined with wavelets as well as ridgelets (wavelet analysis in the Radon domain) or curvelets. Furthermore, the method can be extended to reduce other types of noise.

3.8 Adaptive variance stabilizing transform-based methods

Poisson denoising by the VST-based approach is usually performed in three steps: apply a variance stabilizing transform, such as the Anscombe transform; apply a denoising method designed for additive Gaussian noise to the resulting image; and apply the inverse transformation to the denoised image to obtain the Poisson-denoised image. Hence, VST-based methods can use state-of-the-art Gaussian denoising methods; following this idea, there are some very effective methods, such as BM3D [31], SAFIR [32] and BLS-GSM [33].

For VST-based methods, the choice of the inverse transformation is very important. Makitalo and Foi [11] proposed the optimal inverse Anscombe transform. Their adaptive variance stabilizing transform-based method can be summarized as follows.

Step 1: Apply the Anscombe transform to the Poisson noisy image to obtain an (asymptotically) additive Gaussian noisy image. For the observed pixel values z = (z_1, …, z_N) obtained through an image acquisition device, the Anscombe transform is

A(z) = 2\sqrt{z + \tfrac{3}{8}},

where N is the number of pixels.

Step 2: Denoise the transformed image with an additive Gaussian denoising method.

Step 3: The denoising of A(z) produces a signal D that is considered an estimate of E{A(z) | y}, where y = (y_1, …, y_N) are the pixel values of the noise-free image and E{·} denotes the mean. It is therefore necessary to apply an inverse transformation to D to obtain the desired estimate of y. The inverse transformations that can be used include:

a) the exact unbiased inverse

I_{EU}(D) = 2 \sum_{z=0}^{+\infty} \sqrt{z + \tfrac{3}{8}}\;\frac{D^{z} e^{-D}}{z!};

b) the ML inverse

I_{ML}(D) = \begin{cases} I_{EU}(D), & D \ge 2\sqrt{3/8}, \\ 0, & D < 2\sqrt{3/8}; \end{cases}

c) the MMSE inverse

I_{MMSE}(D) = \frac{\int_{-\infty}^{+\infty} y\, p(D \mid y)\,dy}{\int_{-\infty}^{+\infty} p(D \mid y)\,dy},
where

p(D \mid y) = \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\!\left( -\frac{\big( D - E\{A(z) \mid y\} \big)^{2}}{2\sigma^{2}} \right)

is the generalized probability density function of D conditioned on y.

Another adaptive VST-based method with high accuracy and performance for Poisson noise was proposed by Azzari and Foi [12] and is known as the iterative VST-based method. It is also organized in the three steps above; however, in step 2 the authors proposed their own noise removal step rather than using an existing additive Gaussian denoising method. The method is effective and has high performance because it exploits the characteristics of the Anscombe transformation.

The algorithm starts by setting \hat{y}^{0} = z. At each iteration i = 1, …, K, a convex combination is computed:

\bar{z}_i = \lambda_i z + (1 - \lambda_i)\,\hat{y}^{\,i-1},

where 0 < λ_i < 1 and \hat{y} is the estimate of y. Thus \hat{y}^{\,i-1} can be treated as a surrogate for y:

E\{\bar{z}_i \mid y\} = y = \lambda_i^{-2}\,\mathrm{var}\{\bar{z}_i \mid y\},

where E{·} and var{·} denote the mean and variance, respectively, and \bar{z}_i has a higher SNR (signal-to-noise ratio) than z for any λ_i < 1. A VST f_i is applied to \bar{z}_i to obtain an image f_i(\bar{z}_i), which is denoised by a filter Φ for additive white Gaussian noise to get a filtered image D_i = Φ(f_i(\bar{z}_i)). Assuming D_i = E{f_i(\bar{z}_i) | y}, the exact unbiased inverse of f_i,

I_{f_i} : E\{ f_i(\bar{z}_i) \mid y \} \mapsto E\{ \bar{z}_i \mid y \} = y,

restores the original image: \hat{y}^{\,i} = I_{f_i}(D_i). This process loops until i = K. The accuracy and performance of this method are competitive with other state-of-the-art Poisson denoising methods.
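The basic three-step VST recipe can be summarized in a few lines of code. The sketch below uses the forward Anscombe transform with its simple algebraic inverse rather than the exact unbiased inverse of [11], and a Gaussian blur stands in for a proper Gaussian denoiser such as BM3D; both substitutions are for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # placeholder Gaussian-noise denoiser

def anscombe(z):
    """Forward Anscombe VST: Poisson data -> approximately unit-variance Gaussian."""
    return 2.0 * np.sqrt(z + 3.0 / 8.0)

def inverse_anscombe(d):
    """Simple algebraic inverse; the exact unbiased inverse of [11] is preferable
    at low counts."""
    return (d / 2.0) ** 2 - 3.0 / 8.0

def vst_denoise(noisy, gaussian_denoiser=lambda x: gaussian_filter(x, sigma=1.5)):
    transformed = anscombe(noisy.astype(float))  # step 1: stabilize the variance
    denoised = gaussian_denoiser(transformed)    # step 2: any Gaussian denoiser
    return inverse_anscombe(denoised)            # step 3: map back to the Poisson domain
```

Replacing gaussian_denoiser with a stronger Gaussian-noise filter and the inverse with the exact unbiased inverse recovers the adaptive VST-based scheme discussed above.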
3.9 Other Poisson denoising methods

Since the Poisson denoising problem plays an important role not only in medicine but also in other fields, such as materials science and astronomy, there are, besides the above state-of-the-art methods, many other highly regarded denoising methods, such as the following.

The adaptive BLS-GSM method of Li et al. [34]. This method is based on the Bayesian least squares method and belongs to the VST-based approach.

The optimized anisotropic Poisson denoising method of Radow et al. [35]. This method is based on a variational approach with an anisotropic regularizer in the spirit of anisotropic diffusion, and can be considered part of the total variation regularization family.

The Poisson denoising method based on a greedy approach by Dupe and Anthoine [36]. The method combines a greedy algorithm with a Moreau–Yosida regularization of the Poisson likelihood.

The Poisson noise reduction based on region classification and response median filtering of Kirti et al. [37]. Their contribution is the use of a modified Harris corner point detector to predict noisy pixels and responsive median filtering in the spatial domain.

The primal-dual hybrid gradient algorithm [38], a Poisson denoising method that should also be noted. It is based on total variation regularization and the primal-dual hybrid gradient, and has very good performance.

4 Discussion

First, we discuss the accuracy of the Poisson denoising methods. The MROF, ATV, ANLTV and HNIPM methods are based on regularization, and their accuracy is good enough for use in medical imaging systems. Since the HNIPM method is trained in the Anscombe transform domain, its accuracy is competitive with the other methods despite its locality. If we combine the MROF, ATV and ANLTV methods with a training process that selects optimal parameters in their iterative schemes, their accuracy may become considerably better than that of the HNIPM method, especially for the ANLTV method, because it does not change the structural information of the image during the denoising process.

An effect that reduces accuracy in the denoising process is artificial artifacts. Almost all local methods create this effect, so techniques are needed to avoid adding artifacts to images, as in the case of the ATV method. For non-local methods, since the structural information of the image is preserved, artifacts are seldom introduced. The PRBF method has the lowest accuracy compared with the other denoising methods, including the PURE-LET, MS-VST and adaptive VST-based methods. When noise is filtered by PRBF, halo artifacts appear in the resulting images, and the artifact strength depends on the filter parameters. Although these parameters can be tuned to reduce the halo artifacts, it is very hard to select optimal values. Some methods have been developed to reduce this type of artifact [39] [40], but the problem is still not fully solved, especially for Poisson noise reduction by the bilateral filter. For the PURE-LET, MS-VST and adaptive VST-based methods, the accuracy may be better than that of local variational methods without a training process, particularly for photon-limited images. However, the PURE-LET method is often unstable: in our tests, the denoising result of PURE-LET differs slightly from execution to execution, even with unchanged input parameters and configuration. Comparing the MS-VST method with the PURE-LET method, MS-VST has better accuracy, especially for photon-limited images [10]. The adaptive VST-based methods have better accuracy and performance than the MS-VST method [11] [12]. Both PURE-LET and MS-VST introduce artifacts; for the adaptive VST-based methods, the appearance of artifacts depends on the selected Gaussian denoising method.

Second, we consider the performance of the methods in terms of execution time. Poisson denoising methods such as MROF, ATV, ANLTV, PURE-LET, MS-VST and the adaptive VST-based methods are iterative, so their execution time is longer than that of the PRBF method. The PRBF method is very fast, which is evident when processing large images. The execution time of the PURE-LET, MS-VST and adaptive VST-based methods also depends on the computation time of the transforms. In addition, for the PURE-LET method it depends on the time needed to solve a system of linear equations; for the MS-VST method, on the performance of the solver of the convex optimization problem, such as the hybrid steepest descent method; and for the adaptive VST-based methods, on the performance of the selected Gaussian denoising method. For the other methods (MROF, ATV, ANLTV and HNIPM), the execution time depends largely on the performance of the optimization solver (convex optimization for MROF, ATV and ANLTV, and non-convex optimization for HNIPM). Several methods were recommended in the corresponding works to solve these optimization problems: the gradient descent method and the alternating direction method of multipliers for convex optimization, and iPiano for non-convex optimization.
However, for the convex optimization we can use other, faster methods, such as the primal-dual modified extragradient method, the primal-dual Arrow–Hurwitz method, and the graph-cut method [41]. In [41], Chambolle et al. compared the execution times of these methods with that of the alternating direction method of multipliers. Among these methods, the primal-dual Arrow–Hurwitz method is the fastest, but the proof of its convergence is an open problem. The primal-dual modified extragradient method is provably convergent and is easy to parallelize on a GPU. The graph-cut method is very fast and gives exact discrete solutions, but its efficient parallelization on a GPU is still an open problem. For the non-convex optimization, the iPiano method is a state-of-the-art algorithm and fast enough to be applied in this situation; its parallelization is still an open problem. Hence, the execution time of all the above methods can be improved by combining them with higher-performance algorithms and/or parallel processing.

Finally, regarding methodology, the MROF, ATV, ANLTV and PRBF methods are simple, easy to understand, and easy to implement. The HNIPM is slightly more complex and requires a training process. The PURE-LET, MS-VST and adaptive VST-based methods are the most complex: they are performed in various transform domains, and their accuracy and performance also depend on the computation of these transforms.

To choose a suitable method for Poisson denoising in a specific case, we need to know the advantages and disadvantages of the methods. These are listed in Table 1 in terms of the denoising capabilities of the methods reviewed here. After decades of denoising research there is no universal denoising method, even in the case of additive Gaussian noise. However, concentrating on state-of-the-art denoising methods with an emphasis on domain-specific techniques paves the way for choosing an optimal denoising method. We believe the overview of Poisson denoising methods based on mathematically well-defined techniques given here can be used by researchers in developing and utilizing such methods in various domains.

5 Conclusion

Denoising of CT/X-ray images is still a challenge in medical image processing, especially for photon-limited images. The state-of-the-art methods cannot simultaneously achieve high accuracy on both photon-limited and photon-unlimited images, avoid adding artificial artifacts, and deliver high performance. Developing an effective universal method that reduces multiple types of noise is an even more difficult challenge.

In this paper, we reviewed the following methods: MROF, ATV, ANLTV, HNIPM, PRBF, PURE-LET and MS-VST. PRBF is an excellent choice if execution time is the most important criterion. However, if accuracy is the priority, non-local methods are recommended. If photon-limited images need to be processed, the ANLTV, MS-VST and adaptive VST-based methods are very good choices. If we want to exploit existing Gaussian denoising methods, we can use the adaptive VST-based methods, including MS-VST.

While denoising is performed, it is necessary to avoid adding artificial structures; in that case one can choose the ATV or ANLTV methods, which provide good denoising performance without introducing discernible artifacts. The VST-based methods can also be used in this case if they are combined with structure-preserving Gaussian denoising methods, such as BM3D [31], SAFIR [32], etc.
Following the current research trend, the VST-based approach is a promising option for creating a "universal" method that removes multiple types of noise. The approach has further potential if it can be extended to other signal-dependent noises.

6 References

[1] V. B. S. Prasath, "Quantum Noise Removal in X-Ray Images with Adaptive Total Variation Regularization," Informatica, vol. 28, no. 3, pp. 505-515, 2017. http://dx.doi.org/10.15388/Informatica.2017.141
[2] L. Triet, C. Rick and J. A. Thomas, "A Variational Approach to Reconstructing Images Corrupted by Poisson Noise," Journal of Mathematical Imaging and Vision, vol. 27, no. 3, pp. 257-263, 2007. https://doi.org/10.1007/s10851-007-0652-y
[3] I. R. Leonid, O. Stanley and F. Emad, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1-4, pp. 259-268, 1992. https://doi.org/10.1016/0167-2789(92)90242-F
[4] W. Zhifeng, L. Si, Z. Xueying, X. Yuesheng and K. Andrzej, "Reducing staircasing artifacts in SPECT reconstruction by an infimal convolution regularization," Journal of Computational Mathematics, vol. 34, no. 6, pp. 626-647, 2016. https://doi.org/10.4208/jcm.1607-m2016-0537
[5] S. Joseph, H. Zachary, D. Charles-Alban and W. Rebecca, "Poisson Noise Reduction with Non-local PCA," Journal of Mathematical Imaging and Vision, vol. 48, no. 2, pp. 279-294, 2014. https://doi.org/10.1007/s10851-013-0435-6
[6] H. Liu, Z. Zhang, L. Xiao and Z. Wei, "Poisson noise removal based on non-local total variation with Euler's elastica pre-processing," Journal of Shanghai Jiao Tong University, vol. 22, no. 5, pp. 609-614, 2017. https://doi.org/10.1007/s12204-017-1878-5
[7] F. Wensen, Q. Hong and C. Yunjin, "Poisson noise reduction with higher-order natural image prior model," SIAM Journal on Imaging Sciences, vol. 9, no. 3, pp. 1502-1524, 2016. https://doi.org/10.1137/16m1072930
[8] V. T. Kirti, H. D. Omkar and M. S. Ashok, "Poisson noise reducing Bilateral filter," Procedia Computer Science, vol. 79, pp. 861-865, 2016. https://doi.org/10.1016/j.procs.2016.03.087
[9] F. Luisier, C. Vonesch, T. Blu and M. Unser, "Fast Interscale Wavelet Denoising of Poisson-corrupted Images," Signal Processing, vol. 90, pp. 415-427, 2010. https://doi.org/10.1016/j.sigpro.2009.07.009
[10] Z. Bo, M. F. Jalal and S. Jean-Luc, "Wavelets, ridgelets and curvelets for Poisson noise removal," IEEE Transactions on Image Processing, vol. 17, no. 7, pp. 1093-1108, 2008. https://doi.org/10.1109/tip.2008.924386
[11] M. Markku and F. Alessandro, "Optimal Inversion of the Anscombe Transformation in Low-Count Poisson Image Denoising," IEEE Transactions on Image Processing, vol. 20, no. 1, pp. 99-109, 2011. https://doi.org/10.1109/tip.2010.2056693
[12] A. Lucio and F. Alessandro, "Variance Stabilization for Noisy+Estimate Combination in Iterative Poisson Denoising," IEEE Signal Processing Letters, vol. 23, no. 8, pp. 1086-1090, 2016. https://doi.org/10.1109/lsp.2016.2580600
[13] M. Niklas, B. Peter, D. Wolfgang, M. V. Paul and B. Y. Andrew, "Poisson noise removal from high-resolution STEM images based on periodic block matching," Advanced Structural and Chemical Imaging, vol. 1, no. 3, pp. 1-19, 2015. https://doi.org/10.1186/s40679-015-0004-8
[14] A. S. Sid, Z. Messali, F. Poyer, R. L. Lumbroso-Le, L. Desjardins, T. C. D. Cassoux N., S. Marco and S. Lemaitre, "Iterative Variance Stabilizing Transformation Denoising of Spectral Domain Optical Coherence Tomography Images Applied to Retinoblastoma," Ophthalmic Research, vol. 59, pp. 164-169, 2018. https://doi.org/10.1159/000486283
[15] T. L. Fauber, Radiographic Imaging and Exposure, Missouri: Elsevier, 2017.
[16] S. W. Hasinoff, "Photon, Poisson Noise," in Computer Vision: A Reference Guide, Boston, MA, Springer US, 2014, pp. 608-610. https://doi.org/10.1007/978-0-387-31439-6_482
[17] N. Veasey, "X-ray of skull showing brain and neurons," Getty Images.
[18] V. B. S. Prasath and R. Delhibabu, "Automatic contrast parameter estimation in anisotropic diffusion for image restoration," in International Conference on Analysis of Images, Social Networks and Texts, pp. 198-206, Yekaterinburg, 2014. https://doi.org/10.1007/978-3-319-12580-0_20
[19] O. Peter, C. Yunjin, B. Thomas and P. Thomas, "iPiano: Inertial Proximal Algorithm for Nonconvex Optimization," SIAM Journal on Imaging Sciences, vol. 7, no. 2, pp. 1388-1419, 2014. https://doi.org/10.1137/130942954
[20] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Sixth International Conference on Computer Vision, Bombay, 1998. https://doi.org/10.1109/iccv.1998.710815
[21] B. Thierry and L. Florian, "The SURE-LET approach to image denoising," IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2778-2786, 2007. https://doi.org/10.1109/tip.2007.906002
[22] V. B. S. Prasath, D. N. H. Thanh, N. H. Hai and N. X. Cuong, "Image Restoration With Total Variation and Iterative Regularization Parameter Estimation," in Proceedings of the Eighth International Symposium on Information and Communication Technology, pp. 378-384, Nha Trang, 2017. https://doi.org/10.1145/3155133.3155191
[23] V. B. S. Prasath and D. Vorotnikov, "On a System of Adaptive Coupled PDEs for Image Restoration," Journal of Mathematical Imaging and Vision, vol. 48, no. 1, pp. 35-52, 2014. https://doi.org/10.1007/s10851-012-0386-3
[24] V. B. S. Prasath and A. Singh, "A hybrid convex variational model for image restoration," Applied Mathematics and Computation, vol. 215, no. 10, pp. 3655-3664, 2010. https://doi.org/10.1016/j.amc.2009.11.003
[25] V. B. S. Prasath, D. Vorotnikov, P. Rengarajan, S. Jose, G. Seetharaman and K. Palaniappan, "Multiscale Tikhonov-Total Variation Image Restoration Using Spatially Varying Edge Coherence Exponent," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5220-5235, 2015. https://doi.org/10.1109/tip.2015.2479471
[26] D. N. H. Thanh and S. D. Dvoenko, "A method of total variation to remove the mixed Poisson-Gaussian noise," Pattern Recognition and Image Analysis, vol. 26, no. 2, pp. 285-293, 2016. https://doi.org/10.1134/s1054661816020231
[27] D. N. H. Thanh and S. D. Dvoenko, "A Mixed Noise Removal Method Based on Total Variation," Informatica, vol. 26, no. 2, pp. 159-167, 2016.
[28] D. N. H. Thanh and S. D. Dvoenko, "A Variational Method to Remove the Combination of Poisson and Gaussian Noises," in Proceedings of the 5th International Workshop on Image Mining. Theory and Applications (IMTA-5-2015) in conjunction with VISIGRAPP 2015, pp. 38-45, Berlin, 2015. https://doi.org/10.5220/0005460900380045
[29] D. N. H. Thanh and S. D. Dvoenko, "Image noise removal based on total variation," Computer Optics, vol. 39, no. 4, pp. 564-571, 2015. https://doi.org/10.18287/0134-2452-2015-39-4-564-571
[30] D. N. H. Thanh, S. D. Dvoenko and D. V. Sang, "A Denoising Method Based on Total Variation," in Proceedings of the Sixth International Symposium on Information and Communication Technology, pp. 223-230, Hue, 2015. https://doi.org/10.1145/2833258.2833281
[31] M. Makitalo and A. Foi, "On the inversion of the Anscombe transformation in low-count Poisson image denoising," in Workshop on Local and Non-Local Approximation in Image Processing, pp. 26-32, Tuusula, 2009. https://doi.org/10.1109/lnla.2009.5278406
[32] J. Boulanger, J. B. Sibarita, C. Kervrann and P. Bouthemy, "Non-parametric regression for patch-based fluorescence microscopy image sequence denoising," in Fifth IEEE International Symposium on Biomedical Imaging, Paris, 2008. https://doi.org/10.1109/isbi.2008.4541104
[33] J. Portilla, V. Strela, M. J. Wainwright and E. P. Simoncelli, "Image denoising using scale mixtures of Gaussians in the wavelet domain," IEEE Transactions on Image Processing, vol. 12, no. 11, pp. 1338-1351, 2003. https://doi.org/10.1109/tip.2003.818640
[34] L. Li, N. Kasabov, J. Yang, L. Yao and Z. Jia, "Poisson Image Denoising Based on BLS-GSM Method," in International Conference on Neural Information Processing, pp. 513-522, Istanbul, 2015. https://doi.org/10.1007/978-3-319-26561-2_61
[35] G. Radow, M. Breuß, L. Hoeltgen and T. Fischer, "Optimised Anisotropic Poisson Denoising," in Scandinavian Conference on Image Analysis, pp. 502-514, Tromso, 2017. https://doi.org/10.1007/978-3-319-59126-1_42
[36] F. X. Dupe and S. Anthoine, "A greedy approach to sparse Poisson denoising," in IEEE International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6, Southampton, 2013. https://doi.org/10.1109/mlsp.2013.6661993
[37] T. Kirti, K. Jitendra and S. Ashok, "Poisson noise reduction from X-Ray images by region classification and response median filtering," Sadhana, vol. 42, no. 6, pp. 855-863, 2017. https://doi.org/10.1007/s12046-017-0654-4
[38] S. Bonettini and V. Ruggiero, "On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration," Journal of Mathematical Imaging and Vision, vol. 44, no. 3, pp. 236-253, 2012. https://doi.org/10.1007/s10851-011-0324-9
[39] X. Chuanmin and S. Zelin, "Adaptive Bilateral Filtering and Its Application in Retinex Image Enhancement," in Seventh International Conference on Image and Graphics (ICIG), pp. 45-49, Qingdao, 2013. https://doi.org/10.1109/icig.2013.15
[40] V. K. Nath, D. Hazarika and A. Mahanta, "Blocking artifacts reduction using adaptive bilateral filtering," in International Conference on Signal Processing and Communications (SPCOM), pp. 1-5, Bangalore, 2010. https://doi.org/10.1109/spcom.2010.5560517
[41] A. Chambolle, M. Novaga, D. Cremers and T. Pock, "An introduction to total variation for image analysis," in Theoretical Foundations and Numerical Methods for Sparse Recovery, De Gruyter, 2010, pp. 1-87.

Table 1: Advantages and disadvantages of Poisson denoising methods.

Method                      | Denoising of photon-limited images | Adds artifacts | Level of methodology | Level of parallelization | Level of execution time*
MROF                        | No                                 | Staircasing    | Easy                 | Easy                     | Intermediate
ATV                         | No                                 | No             | Easy                 | Easy                     | Fast
ANLTV                       | Yes                                | No             | Easy                 | Easy                     | Fast
HNIPM                       | No                                 | Staircasing    | Intermediate         | Intermediate             | Fast
PRBF                        | No                                 | Halo           | Easy                 | Unnecessary              | Very fast
PURE-LET                    | No                                 | Star           | Hard                 | Easy                     | Fast
MS-VST                      | Yes                                | Blocky         | Hard                 | Easy                     | Fast
Adaptive VST-based methods  | Yes                                | Yes/No†        | Hard                 | Easy                     | Fast

* Execution time with the solution method that was used in the corresponding proposed work.
† Depends on the selection of the Gaussian denoising method.
On the Properties of Epistemic and Temporal Epistemic Logics of Authentication

Sharar Ahmadi and Mehran S. Fallah
Department of Computer Engineering and Information Technology
Amirkabir University of Technology (Tehran Polytechnic), Hafez Ave., Tehran, Iran
E-mail: sharar.ahmadi@aut.ac.ir, msfallah@aut.ac.ir

Massoud Pourmahdian
Department of Mathematics and Computer Science
Amirkabir University of Technology (Tehran Polytechnic), Hafez Ave., Tehran, Iran
E-mail: pourmahd@ipm.ir

Overview paper

Keywords: epistemic logic, temporal epistemic logic, formal verification, authentication protocol

Received: May 2, 2017

The authentication properties of a security protocol are specified based on the knowledge gained by the principals that exchange messages according to the steps of that protocol. As there are many successful attacks on authentication protocols, different formal systems, in particular epistemic and temporal epistemic logics, have been developed for analyzing such protocols. However, such logics may fail to detect some attacks. To promote the specification and verification power of these logics, researchers may try to construct them in such a way that they preserve properties such as soundness, completeness, being omniscience-free, or expressiveness. The aim of this paper is to provide an overview of the epistemic and temporal epistemic logics applied in the analysis of authentication protocols, to find out how far these logical properties may affect the analysis of such protocols.

Povzetek: V preglednem prispevku je predstavljena epistemska in časovna epistemska logika overitvenega postopka z namenom izboljšave delovanja.

1 Introduction

The principals communicating in a network need to be assured that they are sending/receiving messages to/from the intended principals, as otherwise an attacker may impersonate an authorized principal and gain access to confidential information. To prevent this, the principals use authentication protocols, which are built on cryptography, for exchanging messages [13]. Since there are many successful attacks on authentication protocols [47, 60, 49, 35, 37, 33], different formal systems have been developed for analyzing such protocols. Many of these systems are logical and are known as logics of authentication [14, 8, 7, 36, 38].

The first formal system designated for the specification and verification of authentication protocols is an epistemic logic called BAN [14]. Although BAN can safely verify some protocols, it does not verify some other ones successfully; e.g., it proved that the Needham-Schroeder Public Key protocol (NSPK for short) was secure, but it was later shown that NSPK was vulnerable to a man-in-the-middle attack [46]. To promote the verification power of BAN, some extensions of it have been developed [27, 3, 60, 62, 61, 17, 4]. Moreover, researchers have developed other logics of authentication that are not BAN-like but are inherited from standard logics. Many of these logics are epistemic and temporal epistemic ones that can model different runs of a protocol or can be applied to investigate the knowledge acquired by principals at different instants in protocol runs [16, 45, 50, 52, 8, 53]. For example, a principal may find out who originated a received message at a specific step of a protocol run and may agree with the sender on the received information. There are also dynamic epistemic logics that are useful for modeling knowledge protocols, which model higher-order information and uncertainties in terms of agents' knowledge about each other.
However, since these logics are inconvenient in a cryptographic setting for generating equivalence relations among messages, we do not consider them in this paper [21].

Although the proposed epistemic and temporal epistemic logics have significantly improved the analysis of authentication protocols, every now and then a problem is found and the logics have to be improved to solve it. For example, an attack may be detected by an omniscience-free logic while it is ignored by another logic that is not omniscience-free. Similarly, an authentication protocol may be specifiable in a temporal epistemic logic while it cannot be specified in a logic whose modalities are only epistemic ones. Such issues encourage researchers to find out whether logics of authentication should preserve specific logical properties. The properties usually discussed in this context are soundness, completeness, expressiveness, and being omniscience-free. Moreover, since a powerful attacker is traditionally modeled by the well-known Dolev-Yao message deduction system [24], it is valuable to see whether these logics can model such a system. In this way, if a logic of authentication proves a security goal about an authentication protocol, one can trust that the result is indeed valid in the presence of a powerful attacker who can eavesdrop on all communications, drop, manipulate and replay messages, and perform cryptographic operations using his known keys and messages.

The aim of this paper is not to compare epistemic logics of authentication to alternative approaches to security protocol analysis, such as the applied pi calculus and other process calculi, strand spaces, and multiset and other forms of rewriting. The aim of this paper is to provide an overview of the epistemic and temporal epistemic logics of authentication to find out how far some of their logical properties, such as soundness, completeness, being omniscience-free, and expressiveness, may affect the analysis of authentication protocols. To do so, we discuss not only the conditions under which these logics support Dolev-Yao message deduction, but also the logical properties that encourage us to trust the derived judgements about authentication protocols.

The rest of the paper is organized as follows. In Section 2, we provide an overview of the notions of cryptography, Kripke semantics, and epistemic logics of authentication. In Section 3, we compare epistemic and temporal epistemic logics of authentication and show how far some of their logical properties may affect them in analyzing authentication protocols. Section 4 concludes the paper.

2 Basic notions

Authentication protocols are rules built on cryptographic primitives that help principals authenticate each other while communicating in a hostile environment [13]. An authentication goal can be expressed in terms of a knowledge notion; e.g., sender authentication can be read as "the receiver knows the sender of a received message". Consider the NSPK protocol shown in Figure 1. Every principal in this protocol has a public key and a private key such that the public key of any principal A is known to everyone but only A has the corresponding private key.

In this protocol, principal A generates a nonce na, pairs na with its name A, encrypts na.A with principal B's public key pk(B) so that only B can decrypt it with his private key, and sends {na.A}pk(B) to B. On receiving this message, B decrypts it and sends na back along with his own nonce nb in an encrypted message that only A can decrypt. Then, A sends nb back to B.
The goal of the NSPK protocol is that both A and B can be assured that they are talking to each other and not to an attacker. BAN logic proved that the NSPK protocol was safe [14], whereas Lowe showed that it was vulnerable to the man-in-the-middle attack [46]. Although such a result seems confusing, it is explained by the well-known fact that the NSPK protocol is safe assuming that no compliant initiator will ever select a non-compliant responder for a session. Needham and Schroeder assumed this fact about the principals. However, it was certainly no longer a reasonable assumption when cryptographic protocols were beginning to be used on the open internet, and Lowe outlined the man-in-the-middle attack.

The man-in-the-middle attack, shown in Figure 1, consists of two interleaved sessions of the NSPK protocol. After A initiates a protocol run with I, the intruder I extracts the message, impersonates A, and sends na to B. When B replies, I forwards this message to A and misuses A to obtain nb. Then, I sends nb back to B. Thus, B is deceived into believing that he is talking to A while he is in fact communicating with I. This attack shows that the result of analyzing the NSPK protocol using BAN logic is questionable. Since the original BAN did not have a formal semantics, finding a semantics that could model the above attack became an important topic of research.

Figure 1: The NSPK protocol and the man-in-the-middle attack.

As said earlier, the formal analysis of an authentication protocol using epistemic logics depends on the knowledge gained by the principals executing that protocol. There are two main ways to formalize such knowledge. Assume the statement "B has sent m", where the underlying semantics of a logic of authentication interprets this statement as follows: "B is engaging in an event of a protocol sending message m". If we formalize this statement with a logical formula φ, then "A knows φ" means "A knows that B has sent m". This is called propositional knowledge, which is implicit and does not care about the details of computation [58]. There is also algorithmic knowledge, which formalizes the exact models of principals' knowledge such that if a principal has some bit strings, he can apply cryptographic operators to compute more strings using some predefined algorithms [30]. In this paper, we consider both of these knowledge formalizations, but first we need to explain some primitive notions.

Assume that A is a set of principals exchanging messages by executing an authentication protocol. We may use a logical language L to specify not only the steps of such a protocol but also the intended authentication properties that we want to prove about it. To do so, we need to formalize the exchanged messages as message terms in L, because protocols are a type of message-passing multi-agent system [26]. A message may be a plain term c or a compound one constructed by encryption or pairing, where {m}k is the encryption of message m with the key k and m.m' is the pairing of messages m and m'. There is a need for a derivation system to derive new messages from known ones using cryptographic functions. In this paper, we use the well-known Dolev-Yao message deduction system [24] as follows: m.m' is a message if and only if both m and m' are messages; if {m}k and k are messages, then so is m; finally, if m and k are messages, then so is {m}k. Given a set of message terms Γ and a finite set of Dolev-Yao message deduction rules R, we say that m is derivable from Γ if either m ∈ Γ or m is derivable from Γ by applying the rules in R.
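To make the deduction concrete, the following is a small sketch of a Dolev-Yao derivability check. The tuple encoding ('pair', a, b) / ('enc', m, k) is ours, not the paper's, and only the decomposition direction of the rules is saturated, since closing a set under pairing and encryption would never terminate; composition can be checked on demand instead.

```python
def dolev_yao_closure(knowledge):
    """Saturate a set of messages under the decomposition rules:
    unpairing, and decryption with a known key.
    Messages are atoms (strings), ('pair', a, b) or ('enc', m, k)."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for msg in list(known):
            if isinstance(msg, tuple):
                tag = msg[0]
                if tag == "pair":                        # from a.b derive a and b
                    new = {msg[1], msg[2]}
                elif tag == "enc" and msg[2] in known:   # from {m}_k and k derive m
                    new = {msg[1]}
                else:
                    new = set()
                if not new <= known:
                    known |= new
                    changed = True
    return known

# A principal sees {na.A}_k: without k nothing new is derivable, with k the nonce leaks.
seen = {("enc", ("pair", "na", "A"), "k")}
assert "na" not in dolev_yao_closure(seen)
assert "na" in dolev_yao_closure(seen | {"k"})
```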
Assuming a given set of message terms, there are two interpretations for knowledge. The first interpretation says that a principal i knows a formula φ if he is aware of φ and φ is true in all the worlds he considers possible. In this case, a set of formulas, denoted by A_i(w), is associated with every possible world w such that i is aware of every formula in A_i(w) [31]. The intuition behind such an interpretation is that a principal needs to be aware of a formula before he can know it. For instance, a principal i may be aware of an encrypted message {m}k that he receives without being aware of the message m. In this way, i may know that he receives {m}k if this message holds in all the worlds that are possible to him, while he may not know that he receives m. In the context of verifying security protocols, A_i(w) is implemented as an algorithm that answers "YES" for the formulas that agent i is aware of in his local state in w. In this way, we say that i knows φ explicitly, using algorithmic knowledge [45]. The second interpretation says that a principal i knows a formula φ implicitly, written as an epistemic formula K_i φ, if i knows that φ is true. The set F of L-formulas then comprises not only atomic formulas about sending or receiving messages, but also compound formulas built inductively as follows: for all formulas φ, ψ ∈ F, every principal i, and every message term m, the formulas φ ∧ ψ, ¬φ, K_i φ, and A_i φ are in F.

The authentication protocols and goals formalized by formulas in F need to be interpreted in a proper formal semantics. Since an authentication protocol can be seen as a multi-agent system, and since an interpreted system (IS for short) is a standard semantics for multi-agent systems, authentication protocols can be modeled by interpreted systems too. This builds a foundation for constructing Kripke semantics for epistemic logics of authentication as follows [26]. (Footnote: assume that the set of principals is {i_1, ..., i_n, e}, where "e" denotes a specific principal called the environment. For each principal i there is a finite set L_i of local states, a finite set Act_i of local actions, and a local protocol p_i : L_i → 2^{Act_i}. The transition relation t_i : L_i × Act_1 × ... × Act_n → L_i is defined to return the next local state of i after all the principals perform their actions at that local state. Consider a set of global states G ⊆ L_1 × ... × L_n × L_e, a set of joint actions Act = Act_1 × ... × Act_n × Act_e, a joint protocol p = (p_1, ..., p_n, p_e), and a global transition relation t = (t_1, ..., t_n, t_e), which operates on global states by composing all local and environmental transition relations. An IS is then a tuple ⟨G, I_0, t, {~_i}, π⟩, where G is the set of all global states accessible from any initial global state in I_0 via the transition relation t. For each principal i, there is an accessibility relation ~_i ⊆ G × G such that g ~_i g′ if and only if l_i(g) = l_i(g′), where l_i : G → L_i returns i's local state in the global state g, and π is an interpretation function mapping G × Atom to {true, false} [26].)

A Kripke model of an epistemic logic of authentication can reflect an authentication protocol. Such a model has a set of possible worlds that can be defined as W = R × N, where R is the set of all runs of that protocol and N is the set of natural numbers. Thus, a pair ⟨r, n⟩, called a point, represents a run r at a time instant n. Such a point can be associated with the set of formulas that hold (are true) at that point. A Kripke model is then a tuple of the form M = ⟨W, {~_i}, π⟩, where W is the set of all possible points of the protocol. Moreover, the accessibility relation ~_i can be interpreted in different ways. In one interpretation, for every w_1, w_2 ∈ W, w_1 ~_i w_2 holds if and only if the local states of a principal i are the same in w_1 and w_2. For example, the local states of B at the end of both runs of the NSPK protocol shown in Figure 1 are the same, because B sends and receives the same messages by completing the execution of these two runs. In another interpretation, for every w_1, w_2 ∈ W, w_1 ~_i w_2 holds if and only if the local states of a principal i are indistinguishable in w_1 and w_2. For example, assume that there is a protocol P such that w_1 = ⟨r_1, t_1⟩ and w_2 = ⟨r_2, t_2⟩ are two possible worlds of a Kripke model that reflects P. Principals A and B participate in two runs of P, denoted by r_1 and r_2, and the formulas "A sends {m}k" and "A sends {m′}k′" hold in w_1 and w_2, respectively. Assume that B does not know the proper decryption keys for these messages, so he cannot distinguish the formulas "A sends {m}k" and "A sends {m′}k′", because he sees {m}k and {m′}k′ as two random messages. In this way, he considers both formulas the same. If all of the other formulas that hold in w_1 and w_2 are equal, B cannot distinguish between w_1 and w_2 even if {m}k ≠ {m′}k′, i.e., we have w_1 ~_B w_2. (There are also some other interpretations of the accessibility relation; we refer the interested reader to Refs. [17, 8].) In this model, K_i φ is true at w ∈ W if φ is true at every w′ ∈ W that is accessible from w in i's view. Moreover, A_i φ is true in w ∈ W if φ can be computed by an awareness algorithm A_i in w. Such an algorithm is defined specifically for every protocol and for computing the intended formulas [30]. The truth of other non-atomic formulas is defined in the standard way, and the atomic formulas are interpreted by the interpretation function π [15].

Assume that M = ⟨W, {~_i}, π⟩ is a Kripke model that models a protocol P, and AuthR is an authentication requirement formalized by a logical formula φ. We say that φ is satisfiable with respect to M when there is a w ∈ W such that φ is true in w, i.e., AuthR holds in a run of P that is associated with w. We say that φ is valid with respect to M if φ is true in every w ∈ W, i.e., AuthR holds in all runs of P. We discuss authentication and formalizing authentication in more detail below.

2.1 Formalizing authentication

Most security protocols have been designed for attaining authentication, i.e., one principal should be assured of the identity of another principal. A protocol designer may assign different roles, such as initiator, responder, or server, to principals. Authentication protocols can be classified into two categories with respect to these roles: the protocols that try to authenticate a responder B to an initiator A, and the protocols that try to authenticate an initiator A to a responder B. The notion of authentication does not have a clear consensus definition in the academic literature. However, the clearest and most hierarchical definition of authentication has been devised by Lowe. In this definition, authentication requirements depend on the use to which the security protocol is put. These requirements can then be classified as aliveness, weak agreement, non-injective agreement, and agreement [46]. A protocol guarantees to a principal A "aliveness" of another principal B if the following condition holds: whenever the initiator A completes a run of the protocol, apparently with the responder B, then B has previously been running the protocol.
Aliveness can be ex­tended to “weak agreement” if B has previously been run­ning the protocol with A. “Weak agreement" can be ex­tended to non-injective agreement on a set of data items (where V is a set of free variables of the protocol) if B has previously been running the protocol with A, B was acting as responderin his run, and the two principals agreed on the values of all the variables in V . Weak agreement can be extended to “agreement" if each such a run of A corre­sponds to a unique run of B [46]. There are manyattacks that occur due to parallel runs of a protocol [47]. The defnition of weak agreement for au­thentication guarantees a one to one relationship between the runs of two principals as follows: a protocol authen­ticates a responder to an initiator, whenever a principal A starts j runs of the protocol as an initiator and l runs as a responder all in parallel; and completes k . j runs of the protocol acting as initiator apparently with a responder B, then B has recently been running k runs acting as respon­der in parallel, apparently with A. Moreover, A protocol S. Ahmadi et al. authenticates an initiator toa responder, whenevera princi­pal B starts j runs of the protocol as a responder and l runs as an initiator, all in parallel; and completes k . j runs of the protocol acting as responder, apparently with initiator A, then A has recently been running k runs acting as ini­tiator in parallel, apparently with B [59]. In the following example, we explain the defnition of agreement in more detail. Example 2.1. Consider the following challenge-response protocol that aims to authenticate an initiator A to a responder B, and to authenticate a responder B to an initiator A. In this protocol, kab isa sharedkeybetween A and B. Moreover, na and nb are two noncesgeneratedby A and B, respectively. A › B : na B › A : {na}kab .nb A › B : {nb}kab There is the following refection attack on the protocol that consists of two sessions of the protocol executed in parallel. In this attack, B has the responder role and I(A) denotes an intruder who impersonates A: 1.I(A) › B : na 2.B › I(A): {na}kab .nb 10.I(A) › B : nb 0 20.B › I(A): {nb}kab .n b 3.I(A) › B : {nb}kab B starts two runs of the protocol as a responder to A, but it completes only one run (lines: 1, 2, and 3) withI(A) while A does not participate in these runs. So, the protocol fails to aim the agreement requirement. In the next example, we show how we can formalize an authentication requirement. Example 2.2. Consider the NSPKprotocol, showninFig­ure 1. We want to formalize the non-injective agreement authentication requirement. To do so, we use epistemic modalities as follows: KB KA msg na.nb This formula can be read as follows: “B knows that A knows the message na.nb". If this formula can be proven for the NSPK protocol, then we say that the protocol guar­antees “non-injectiveagreement”to B,where{na.nb} ap­pears as the set of data items that the two principals agree on their value. Since B encrypts nb with A’s public-key and sends {na.nb}pk(A) to A, whenever B receives a mes­sage containing nb, he concludes that A has previously been running the protocol with B because A is the only principal who has A’s privatekeyto decrypt {na.nb}pk(A) in order to extract nb. The man-in-the-middle attack de­ceives B to believe that he is talking to A while he is in fact talking to I, who is an intruder. 
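A hedged sketch of how the nested formula of Example 2.2 is evaluated over a Kripke model may help here. The two points, the accessibility relations, and the valuation of the atom msg_na_nb below are invented purely to show the mechanics of the evaluation; they are not a faithful model of the NSPK run set.

```python
# A toy evaluation of the nested formula K_B K_A msg(na.nb) from Example 2.2.
points = {"w1", "w2"}
truth = {"w1": {"msg_na_nb": True},       # the atom holds at w1 ...
         "w2": {"msg_na_nb": False}}      # ... but not at w2 (an invented valuation)
access = {                                # w ~i w': principal i cannot tell w and w' apart
    "A": {("w1", "w1"), ("w2", "w2")},                    # A distinguishes the two points
    "B": {("w1", "w1"), ("w2", "w2"),
          ("w1", "w2"), ("w2", "w1")},                    # B does not
}

def holds(formula, w):
    """formula is an atom name or a triple ("K", principal, subformula)."""
    if isinstance(formula, str):
        return truth[w].get(formula, False)
    _, agent, sub = formula
    return all(holds(sub, v) for (u, v) in access[agent] if u == w)

print(holds(("K", "A", "msg_na_nb"), "w1"))               # True
print(holds(("K", "B", ("K", "A", "msg_na_nb")), "w1"))   # False: B considers w2 possible
```

The mechanism is the one discussed in the text: as soon as the model contains a point that B cannot distinguish and at which the inner knowledge fails, the nested agreement formula is refuted. This is how a semantics rich enough to contain the attack runs rejects a goal that a smaller model appears to validate.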
AllBAN-like logics proved the above formula for the NSPK protocol, whereas an omniscience-free epistemic BAN-like logic, that is re­ferred to WS5 throughout this paper, could identify this in­sider attack[17]. In fact, being omniscience-free enabled WS5 to model the Dolev-Yao message deduction properly. We will explain this logic in detail at next sections. 3 Logical properties In this section, we investigate howfar some properties of epistemic and temporal epistemic logics of authentication may affect the analysis of authentication protocols. The properties that we investigate are soundness, completeness, being omniscience-free, and expressiveness. 3.1 Soundness and completeness Beside syntax and semantics, every logic may have a proof system X consisting of some axioms and rules where the axioms are valid with respect to the logic’s semantics and the rules preserve validity i.e. if the premise of a rule is valid, the result of it is also valid. Let X be a proof system of a logic of authentication that is based on the Dolev-Yao deduction system. Moreover,let the following statement be an authentication property: “principals i and j know that they are talkingto each other”, where i and j are engaging only in one session and both peers has received certain mes­sages as common knowledge to authenticate each other. In this case, proving . in X means that i and j know that they are indeed talking to each other even in an environment where there are attackers who can derive messages due to the Dolev-Yao deduction system. The proof system X may have some interesting proper­ties, two of which are soundness and completeness: X is sound if every derivable formula . in X is also valid. X is complete if every valid formula . is provable in X . This is also called “weak completeness”bysome researchers [15]. Logical analysis of security protocols relies on formal mod­els of cryptographywhere cryptographic operations and se­curity properties are defned as formal expressions. Such models ignore the details of encryption and focus on an abstract high-level specifcation and analysis of a system [1, 14, 24, 28]. Provingthe soundnessand completenessofalogicofau­thentication gives a strong intuition that the formal seman­tics of that logic is defned properly and it is working as expected. So, the logic can be applied safely in analyzing authentication protocols.For formalverifcationofa secu­rity protocol, there is a need for a formal model to refect that protocol appropriately i.e. there is a need for a sound formal model for that protocol. Using a logical model, the verifcation is then dependent on the following parameters: frst, the protocol must be described in the language of the logic. This description willbea partofa trust theory which consists of correct and acceptable propositions used in de­ducing security requirements. Even with a bad description Informatica 43 (2019) 161–175 165 of a protocol and its initial assumptions, the logic should consider all possible runs of that protocol. Todiscuss the second parameter for defningasound for­mal model, we frst provide an overview of the Dolev-Yao indistinguishability relation. The intuitive idea for defn­ing this relation is the fact that two messages are indis­tinguishable if any test -based on a limited set of opera­tions on messages -gives the same result aboutthe confg­uration ofthose messages. The Dolev-Yao indistinguisha­bility relation can be related to cryptographic computing models. 
In this case, a formal model is said to be com­putationally sound. This is expressed for static equiva­lence, which is a general form of indistinguishability, ex­plicitly: two local states are static equivalents if they sat­isfy the same equivalence tests. For a given theory of equation, static equivalence is based on a computable ef­fcient set of operations such as symmetric and asymmet­ric encryption and decryption. For example, consider the simplest equivalence theory satisfying an equation of the form dec(enc(m, pk), pr)= m, where pk, pr, and m are a public key, a private key, and a message, respectively. Moreover, enc is an asymmetric encryption operator that encrypts m with pk and dec is an asymmetric decryption operator that decrypts an encrypted message with pr [2]. There is also another parameter for defning a sound for­mal model. Assume that there is a specifcation of an au­thentication protocol P and some initial assumptions using a logic of authentication L. We need to show whatever is deduced in L about P should be consistent with what the principals involved in executions of P actually infer. As­sume that . is a fnite set of logical formulas including the specifcation of P and its initial assumptions. Moreover, assume that the desired security goal is formalized by a formula . that canbeprovenbyapplyingthe formualsin . and the axioms and rules of L’s proof system. If L is log­ically sound and M is its model that satisfes the formulas in ., M also satisfes .. If we show that M considers all possible runs of P , including those that attackers may participate in, the model refects P properly. As said earlier, the other theorem that is usually inves­tigated for every logic is completeness. All valid formu­las of a complete logic are also provable in its proof sys­tem. This motivates researchers to provide provers for the analysis of security protocols [22, 52, 51, 29]. If such a logic is also sound, the derived statements are more trusted since completeness shows that the formal semantics work asexpected. Completenessmaybearesultof another prop­erty.Asanexample,the completenessofBAN-like logics has been an open problem for many years because some of them do not have anyformal semantics and some other ones haveinaccurate formal semantics. So, the logics could not model some possible runs executed by a Dolev-Yao at­tacker. However, it has been shown that the completeness ofBAN-like logics canbe provenby presentinga formal semantics that avoids logical omniscience [18]. We will discuss logical omniscience in more details later. There is also another line of research that proves com­pleteness for monadic fragments of a frst-order temporal epistemiclogic with respect to their corresponding classes of quantifed interpreted systems. Such systems may have the following typical properties: synchronicity, perfect re­call, no learning, and unique initial state. In contrast to most of the logics of authentication, such a logic has some axioms and rules that explore the relationship between its time and knowledge modalities [5, 6]. 3.2 Logical omniscience The semantics of a logic of authentication can be defned based on the standard Kripke structure. Such a semantics may lead to the logical omniscience problem where princi­pals know all logical truths i.e. theyknow all consequences of what theyknow [31]. Infact, the problem bypasses the limitations placed on the knowledge of a principal who re­ceives an encrypted message but does not have the right keyto decrypt that message. 
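The restriction that logical omniscience bypasses can be made concrete with a small, purely illustrative awareness check; the representation of local states and the function name below are our own assumptions, not the mechanism of any specific logic. The agent only becomes aware of a plaintext if his local state contains both the ciphertext and the matching key.

```python
# A toy "awareness algorithm": answer YES only for facts the agent can compute.
def dy_aware(local_state, fact):
    """local_state is a set of atoms and ('enc', m, k) terms the agent has seen."""
    if fact in local_state:
        return True
    # the agent becomes aware of m only by decrypting some {m}k with a key it holds
    return any(isinstance(t, tuple) and t[0] == "enc" and t[1] == fact and t[2] in local_state
               for t in local_state)

state_without_key = {("enc", "m", "k")}
state_with_key    = {("enc", "m", "k"), "k"}
print(dy_aware(state_without_key, "m"))   # False: no omniscient jump to the plaintext
print(dy_aware(state_with_key, "m"))      # True
```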
Assume that L is an epistemic logic of authentication which has a formal semantics built on the standard Kripke structure, and that a set of logical formulas in L is given. Moreover, assume that a formula φ can be derived from this set in L's proof system and that an agent i knows all of the formulas in the set. Then the formal semantics of L leads to logical omniscience, where i knows φ. This fact is an immediate result of the interpretation of the knowledge modality of L with respect to the underlying standard Kripke semantics. In this way, a formula K_i φ is true in a state (possible world) w if and only if φ is true in every state that is indistinguishable from (accessible from) w in i's view. For example, assume that the following formula is true in all possible runs of a protocol:

i sent msg {m}k → submsg(m),

where submsg(m) is read as "m is a sub-message", i.e., m is a sub-message of {m}k. Using the standard Kripke semantics, for every principal j the following formula is then also true in all runs of that protocol:

K_j (i sent msg {m}k) → K_j submsg(m).

Now, assume that the anonymity of i fails and the formula K_j (i sent msg {m}k) is valid. Therefore, K_j submsg(m) can be deduced by applying modus ponens. But in fact, this judgment is true only if j knows the symmetric key k needed to decrypt {m}k. Thus, logical omniscience should be avoided in order to restrict principals' knowledge to what they can compute from their known facts, messages, and keys.

There are different approaches to solving the logical omniscience problem in the analysis of security protocols. In Ref. [19], the problem is solved by presenting a generalized Kripke semantics based on a permutation-based IS. Such a semantics results in a weakened necessitation rule for a logic faithful to BAN; hence the logic becomes an omniscience-free weakened S5. Such a logic formalizes an implicit form of knowledge that results in abstract, high-level reasoning about security protocols [17]. The logical omniscience problem can also be avoided by exact models of the knowledge that a principal acquires during protocol runs. For example, such models are applied by a logic called TDL [45]. In this way, the part of the logic that links epistemic modalities to awareness algorithms becomes omniscience-free: a principal knows a fact if he is aware of that fact. The idea of using awareness algorithms in formal security originated in Ref. [30]. We will talk about the logics that use such algorithms at the end of this section.

3.3 Expressiveness

Epistemic logics of authentication usually have three different kinds of operators besides the standard ones of propositional logic. These operators are temporal, epistemic, and awareness operators. Hybrid systems may also have some other operators, such as type operators or algebraic operators, but we do not consider hybrid systems in this paper and refer the interested reader to Ref. [39]. Temporal modalities formalize precedence of actions, time intervals, etc., such as "next" and "in a time interval [t1, t2]". Epistemic modalities formalize the knowledge of principals. Awareness operators show the algorithms that principals use to become aware of facts. The expressiveness of epistemic logics of authentication relies on their logical order and modalities. Moreover, if a logic has temporal operators, its expressiveness also relies on the method by which the epistemic core is augmented with temporal modalities. There are three approaches for adding temporal modalities to an epistemic core of a logic of authentication.
The frst approach, which makes the resulting temporal epis­temic logic very expressive, is the fusion approach where epistemic and temporal modalities may appear in each other’s scope without any restrictions. Moreover, the re­sulting logic may have axioms and rules that explore inter­actions between time and knowledge [5, 8, 23, 4]. This approach has been used in developing logics applied in analyzing a wide range of security protocols. Two ex­amples of these protocols are classical authentication pro­tocols such as NSPK and stream authentication protocols such as TESLA [57], which is used for sending streams of messages: videos, audios, etc. The second approach is to use fbring technique where temporal and epistemic modalities may appear in each other’s scope without any restrictions. But, the time and knowledge dimensionsofa fbredlogicare orthogonal.So, such a logic has no axioms and rules to explore the rela­tionship between time and knowledge. Fibring technique does not model the knowledge which is obtained as a con­sequence of a particular event. Thus, a fbred logic is less intuitive for modeling security protocols and less expres­sivein comparisonwith otherlogicsbuiltonthe fusionap­proach. However, theorems such as soundness and com­pleteness may be easily proven for a fbred logic if its con­stituent logics are sound and complete. Moreover, a prover canbeeasily constructedfora fbredlogicifits constituent logics have provers. Such a logic has been developed for the analysis of the TESLA protocol [56]. Finally, the last approach for adding temporal modalities toan epistemic coreofalogicofauthenticationisusingthe temporalization technique. This technique operatesina hi­erarchical way such that the temporal modalities can never appear in the scope of epistemic modalities i.e. the result­ing logic does not have anyformula of the form Ki ., which can be read as follows: agent i knows that at the next step . holds. This approach has been used for verify­ing different protocols such as TESLA and WMF [53, 52]. In Figure 2, we compare some important epistemic and temporal epistemic logics of authentication against the above properties whereeveryrowofthe fgureis dedicated to a specifc logic. The 1st column of each row shows the logic name. The 2nd column shows if the logic or-deris “propositional”or “frst-order”, denotedby“PR”and “FO”, respectively. The 3rd column shows the type of op­erators used in the logic where “E”, “A”, and “T” denote “epistemic”, “awareness”, and “temporal”, in order. The 4th column shows if the logic has a proof system, model checker, tableau .... The 5–7th columns show if the logic is sound, complete, or omniscience-free, respectively. If a logic is sound, complete, or omniscience-free, we showthis bya “X” symbol. If anyof these properties does not hold, we show thisbya “×” symbol. If a logic does not have a proof system, it has no soundness and completeness theo­rems. In this case, we use a “-” symbol. Finally in the last column, “EXP ” denotes that the explicit part of the logic is omniscience-free. Some of the security protocols ana­lyzed by these logics are shown in Figure 3. The attacker models of these logics are also summarized in Figure 4. In what follows, we explain the above logical properties in more detail. 3.4 Inside the logics In 1989BAN logicwas proposed as the frst formal sys­tem for the specifcation and verifcation of authentication protocols [14]. 
Thisisasimple intuitivepropositional epis­temic logic named after its developers Burrows, Abadi, and Needham. The syntax ofBAN consists of inference rules about principals’ beliefs and their actions. This syntax en-ablesBAN not only to specify the steps of an authentica­tion protocol and its security goals,but also to derive the intended goals about that protocol. The frst step of ap­plyingBANisto idealizea protocolintoan abstraction.In the second step, one should translate the initial assumptions and the security goals intoBAN language, relate each ide­alizedstepofthe protocoltoaBAN formula,andthenuse BAN inference rules to derive intended goals. The soundness and completeness theorems cannot be proven forBAN because it has no formal semantics. This logic has only epistemic modalities and it is not expressive enough to specify or verify highly time-dependent proto­cols such as stream authentication protocols [29]. This, along with BAN’s propositional order, results in its low specifcation power. BAN has no formal attacker model Informatica 43 (2019) 161–175 167 but the capabilities of attackers are somehow embedded in its proof system. For example, BAN has a rule -called the message-meaning rule -which has two premises. Assume that P and Q are two agents, m is a message, k is a key, and {m}k denotes that m is encrypted with k. The frst premise of this rule says thatPbelieves that k is a shared key between P and Q. This is formalized as follows: P Believes P -k Q. The second premise says that some­one has sent a message which contains {m}k to P . This is shown as follows: P sees {m}k. The conclusion of this rule is that P believes that at some time Q sent a message which contains m. Thisis formalizedby the following for­mula: P Believes Q said {m}k. As k is a shared key between P and Q, only these two agents can use k to en­crypt m and no other agent can create {m}k. Thus, when P receives {m}k it concludes that at some time Q has sent a message which contains {m}k. In this way, even if an attacker has sent a message containing {m}k to P , P be­lieves that this message has not been originated by Q [14]. As anexampleof formalizing authenticationinBAN, we say that authentication is complete for the NSPK protocol if there are nonces na and nb, generated by A and B, re­spectively, such that the following statements hold: A Believes A -na.nb B B Believes A -na.nb B. The above formalizations can be classifed as non-injectiveagreement. However,other weaker formalizations can be presented too. As said earlier, BAN has some problems while verify­ing authentication protocols. Manyextensions of this logic have been developed to resolve its problems. One of these extensionsisGNYdevelopedin 1990forverifyinga wider range of authentication protocols. GNY emphasizes sepa­rating the content and meaning of messages while it follows the same method as BAN for formalizing authentication. This logic is named after its developers Gong, Needham, andYahalom. Although GNY canbe appliedinverifying a sample voting protocol successfully, the logic still suf­fers the same problems as its predecessor. Neither BAN nor GNY preserves the properties discussed in Subsections 3.1, 3.2, and 3.3. Thus, their derivations are not trustwor­thy. Moreover, these logics cannot analyze highly time-dependent protocols such as the TESLA protocol. The frst attempt for developing a formal semantics for BANwasin1991when AbadiandTuttleimprovedthesyn­taxand inference rulesofBANandalso presenteda formal semantics -called AT -for BAN [3]. 
This semantics is based on the standard Kripke structure constructed from interpreted systems. In this model, a principal i knows a formula φ as follows: "i knows φ" is true at a point ⟨r, k⟩ if φ is true at every point ⟨r′, k′⟩ that is indistinguishable from ⟨r, k⟩ in i's view. This extension of BAN, called AT logic, is sound with respect to the AT semantics. Thus, the proofs in this logic are more trustworthy. However, the completeness of AT remained an open problem for years due to the logical omniscience problem, which was an immediate consequence of its standard Kripke semantics [19].

Figure 2: Some logics applied in analyzing authentication protocols.
Figure 3: Some protocols analyzed by the logics in Figure 2.

Because of the logical omniscience problem, the logic bypasses the principals' restricted knowledge. Thus, AT is not enough for modeling different runs of a protocol. This logic follows BAN's method for formalizing security properties. However, it can also formalize such properties at specific points of protocol runs. For example, we may want to verify whether a formula such as P ↔k Q (read "k is a shared key between P and Q") is true at a specific point ⟨r, t⟩ of a protocol run or not.

The correctness of a security protocol highly depends on the evolving knowledge of the principals communicating through the protocol steps while time is passing. In 1993, Syverson added temporal operators to BAN for the first time. We call this logic TBAN throughout this paper [60]. TBAN could verify a key distribution protocol that the previous BAN-like logics could not, because they lacked temporal modalities and ignored a causal consistency attack on the protocol. TBAN is sound with respect to the AT semantics. Moreover, it is able to formalize temporal modalities and statements. Thus, the specification power of TBAN is greater than that of its predecessors. This logic was a starting point in using both temporal and epistemic modalities for analyzing authentication protocols, and later many other logics followed this approach [53, 4, 23, 22, 52, 56, 8]. For example, we can formalize authentication for NSPK as follows, where the symbol □ is read as "always":

□ A Believes A ↔na.nb B
□ B Believes A ↔na.nb B.

Although TBAN is more expressive than its predecessors, it cannot analyze such protocols as the M-TESLA, Mix, and Dual Signature protocols [4, 17], since it is not omniscience-free. Contemporary to TBAN, van Oorschot followed another line of research and extended BAN to facilitate the verification of key agreement protocols [62]. The extended logic was named VO. Although BAN, GNY, and VO have proof systems, we cannot analyze their soundness or completeness because they lack formal semantics. All of these BAN extensions were unified into a sound logic, called SVO [61], whose axioms and rules were simplified. The completeness of BAN was finally proven in 2007, when a proper proof system and formal semantics were provided for this logic. GNY, AT, VO, and SVO have no formal attacker model, but the capabilities of the attacker are somehow embedded in the proof system. So, the attacker model of these logics is similar to that of BAN.

Figure 4: The attacker model of the logics in Figure 2.

In 2005, it was shown that the AT semantics could not identify some possible attacks because of the logical omniscience problem. To solve this problem, Cohen and Dam provided a generalized Kripke semantics for BAN such that BAN's soundness, completeness, and decidability were proven [19, 18]. Using this semantics, BAN can be embedded into an S5 logic where specific message permutations are defined over messages. In this way, a formula K_i φ is true in a possible world w if, for every possible world w′ of the model that is indistinguishable from w in i's view with respect to a message permutation π, the formula π(φ) is true in w′. The formula π(φ) is the one in which every message m is replaced by π(m) [19]. Such a generalized Kripke semantics results in a weak necessitation rule for BAN. In this way, the application of the weak necessitation rule along with axiom K does not lead to logical omniscience in the derivations. So, the underlying semantics restricts the knowledge gained by an attacker. We call this logic Weakened S5, WS5 for short, throughout the paper. WS5 can be extended by first-order quantifiers. The resulting logic is a sound and complete first-order logic [20], denoted by FOWS5 in this paper. It has been shown that these logics can safely specify and verify the Mix protocol since they are omniscience-free. However, they do not have temporal modalities to analyze such protocols as stream authentication protocols. WS5 can also be extended by temporal modalities. The resulting logic is called TWS5. This logic successfully verifies a modified TESLA protocol, called M-TESLA, which cannot be analyzed by the previous temporal epistemic logics, which are not omniscience-free [4]. However, these logics make use of message permutation functions, which cause exponential run time [39]. Although WS5 and its extensions are sound, complete, and omniscience-free, the expressiveness of WS5 is less than that of its extensions, since it is propositional and does not have temporal modalities.

WS5, FOWS5, and TWS5 use different symbols to formalize security properties. While WS5 can only use epistemic modalities, FOWS5 and TWS5 can make use of quantifiers and temporal modalities along with epistemic modalities, respectively. As an example, assume that we want to formalize a message-sending axiom in TESLA which says that if the sender S sends a message M to the receiver R, then R may receive this message in the time interval [u, v] [4]. This can be done as follows, where next^u denotes u clock ticks later:

S sends M → (next^u R receives M) ∨ ... ∨ (next^v R receives M).

It has been proven that the underlying generalized Kripke semantics of WS5 restricts the knowledge gained by an attacker, because the message permutation functions make the logic omniscience-free. In fact, it has been shown that the Dolev-Yao deduction system is reflected in the semantics of WS5 implicitly, because such a semantics considers all of the possible runs of a protocol in the formal model, including those runs executed by a Dolev-Yao attacker, by applying the permutation functions to messages [17, 4].

There are also many standard logics that were not originally developed for analyzing authentication protocols but are well adapted to this purpose. One of these logics is LKX_n, proposed by Halpern and Pucella in 2003 [30]. This logic uses two kinds of knowledge: implicit knowledge, which is similar to that of BAN-like logics, and explicit knowledge, which is linked to knowledge algorithms. The attacker model is defined based on the explicit knowledge in LKX_n. In fact, LKX_n defines attackers as the Dolev-Yao deduction system explicitly, using a Dolev-Yao knowledge algorithm. We refer the interested reader to Ref. [30] for the full algorithm. Explicit knowledge prevents logical omniscience; thus, attackers infer only the statements that they can compute. It has been proven that algorithmic knowledge can model the Dolev-Yao deduction system.
This model is a useful abstraction because it does not consider the cryptosystem used in the protocol and it can easily capture probability for guessing appropriatekeys.LKX does not have a proof n system. Thus, it does not havesoundness and completeness theorems. However, a proof system may also be developed for it. This logic is fexible and modular. Moreover, it can be extended with probabilities to guess keys [25, 32] ac­cording to Lowe’s model for guessing attacks [49]. LKX n cannot verify highly time-dependent protocols because it has no temporal modalities. This logic also uses implicit knowledge to model principals’ beliefs about what is hap­pening during a protocol run. LomuscioandWozan followedthe same approachtode­velopatemporal epistemic logic called TDL for the specif­cationandverifcationof TESLA[45].Thelogicnotonly has traditional knowledge modalities but also has aware- S. Ahmadi et al. ness operatorsto representexplicitknowledge. Thus,apart of TDL that links to explicit knowledge is omniscience-free. The logic defnes attackers as the Dolev-Yao deduc­tion system explicitly. This is formalized through a deriva­tion relation that shows how an attacker can extract a mes-sagefromasetofreceived messagesandkeysusing admis­sible operations. As an example, there is a derivation rule 0 as follows: if an agent i derives m.musing the Dolev-Yao knowledge algorithm, denoted by an awareness oper­ator ADY , i can also derive m using this algorithm. TDL i has a computationally-grounded semantics. Moreover, it is intuitive, sound, complete, and decidable. TDL has a high specifcation power because it is a frst-order modal logic that has three types of modalities: epistemic, awareness, and temporal. In 2004, Dixon, Fernandez Gago, Fisher, and van der Hoek introduced a frst-order temporal logic of knowledge -called KLn -for the specifcation and verifcation of au­thentication protocols and verifed NSPK as a case study [23]. This logic is useful for reasoning about the evolving knowledge of a principal over time. Especially, this is im­portantifwewanttobe surethata principal obtains certain knowledgeatatime instant.KLn isafusionofalinear time temporal logic anda multi modal epistemic logic S5. Thus, the logic is powerful enough to specify agents’ capabili­ties explicitly by logical formulas. As an example of using quantifers and modalities of KLn for formalizing security requirements, consider the NSPK protocol as shown in Fig­ure 1. Assuming that the predicate value - nonce(N, V ) of the logic indicates that the value of nonce N is V , we want to show that every other principal, other than A and B, who are running the protocol can never know the value of A’s nonce. This is formalized as follows: . V 2¬KC value - nonce(Na,V ) where 2 and KC are two modalities of KLn that are inter­preted as “always" and “agentCknows", respectively. To provea security goal. from protocol specifcation ., where both . and . are formulas in KLn, we must prove . › .. To do so, a resolution method is applied which isinfacta refutation system showing that . .¬. is not satisfable.Aprototype theoremproverhasbeendeveloped fora single modalversionof temporal logicsofknowledge but the developers insisted on a need to develop a more powerful prover to deal with the multi-modal case in order to prove theorems automatically [23]. Contemporary to KLn, Liu, Ozols, and Orgun devel­oped the TML+ logic for analyzing authentication proto­cols [41]. 
The logic uses the temporalization technique to combine an epistemic logic -called TML -with a simple linear-time temporal logic -called SLTL. This technique allows adding SLTL to TML in a hierarchical way such that the temporal operators can never appear in the scope of epistemic ones.TML+canbeappliedfortheverifcationof time-dependent properties of security protocols. To do so, atrusttheoryforaprotocolthatistobeverifedisprovided. The theory consists of TML+ axioms and rules along with a specifcation of the protocol and initial assumptions in the form of TML+ formulas. Then, the theory is applied to drive security goals. TML+ is sound and complete, yet it is not omniscience-free. However, the logic assumes that an attacker cannot decrypt an encrypted message if he does nothave the rightkeyfor decryption. In 2007, a tableau system was developed for TML+ [51]. This was used for verifying both static and dy­namic aspects of some protocols such as WMF, Needham-Schroeder symmetric key, and Kerbros. The developers proved the soundness and completeness of the labelled tableau calculus based on the soundness and complete­ness results of the constituent logics of TML+. Then, they sketched a resolution-type proof procedure and proposed a model checking algorithm for TML+ [51]. In 2006, Orgun, Ji, Chuchang, and Governatori con­structed the FL logic for analyzing stream authentication protocols from TML and SLTL using the fbring technique [56]. In this way, FL is sound and complete with respect to its fbered model because its constituent logics are proven to be sound and complete [40, 34]. The original idea of the fbring technique is based on the assumption that both constituent logics are endowed with point-based semantics so that a model of the fbred logic is a point-based seman­tics and every point of the fbred model is closely related to a point in a model of each constituent logic. Thus, a model of a fbred logic can be related to several models in each of the constituent logics. As a consequence, a fbred logic preserves the theorems proven for its original logics [42]. The FL fbred model is such that the formulas of FL may contain anynumber of temporal and belief operators without anyrestrictions. Anyformula whose main opera-torisa temporal modality,isinterpretedby referringtothe SLTL semantics and anyformula whose main operator is a belief modality, is interpreted by referring to the Kripke semantics of TML. The fbring technique also allows developing proof pro­cedures from the constituent logics since the time and knowledge dimensions of FL are orthogonal[29]. In 2008, a modal tableau was developed for FL[29] by adapting KEM, a modal tableau system, showing how combinations of multi-modal logics can provide an effective tool for ver­ifying the TESLA protocol. It has been proven that the adapted KEM is sound and complete for SLTL and TML. As a result of the fbring technique, these theorems have also been proven for FL. KEM can be used to automatically check for formal properties of security protocols [29]. The logic FL is more expressive than a temporalised be­lief logic such as TML+ [52] because temporal and belief modalities may appear in each other’s domains. This logic is fexible because of its modular nature and it has a high specifcationpower sinceitisa frst-order logic making use of temporal and knowledge modalities. However, it is not omniscience-free because of the standard Kripke seman­tics of its belief aspect. In 2012, Ma et al. 
constructed a temporalized belief logic -called TBL [53] -that was less expressive than FL because of using the temporaliza- Informatica 43 (2019) 161–175 171 tion technique. However, it was appllied in verifying the TESLA protocol successfully. TBL is sound and complete, yet it is not omniscience-free. This logic follows FL to de­fne the attacker’s capabilities where they are defned simi­lar to the Dolev-Yao model. However, neither of these two logics prove if theycan model such attackers. In 2009, Boureanu, Cohen, and Lomuscio presented an effective fully automatic approach for analyzing security protocols in the presence of the Dolev-Yao attackers. This approach makes useofa temporal epistemiclogic, called CTLK [8]. The frst step is to specify a security proto­col in CAPSL [54], which is a high-level specifcation lan­guage formally describing protocols. Then, the specifca­tions of the security protocol and security goals in CAPSL are translated by an automatic compiler into an ISPL (In­terpreted Systems Programming Language), which is a set of CTLK specifcations to be checked. The result of this translationisa fle thata MCMAS model checker3 takes as input and checks whether or not the security goals are sat­isfed by the protocol. In most cases, this gives a counter-model if they are not satisfed by the protocol. The main contributionisa compiler from protocol descriptionsgiven in CAPSL into ISPL, the input language for MCMAS. The translation is optimised to limit the state explosion and beneft from MCMAS’s various optimisations. To do so, the authors developed PD2IS (Protocol Descriptions to In­terpreted Systems), an automatic compiler from CAPSL protocol descriptions to ISPL programs. This verifcation method assumes a bounded number of concurrent protocol sessions instantiated to run concurrently [8]. This CTLK approach makes use of the consistent mes­sage permutations introduced in Ref. [19] through a smart translation of security goals into CTLK formulas in order to prevent bad effects of the logical omniscience. However, CTLK is not omniscience-free. The approach models the Dolev-Yao attacker explicitly as an environment which has the capabilities of honest principals, can eavesdrop all com­munications, compose and replay messages into anyproto-col session, and perform cryptographic operators using his known keys. The attacker also has an identity and pub-licandprivatekeysthatmaybeusedto communicatewith other principals and record their send actions.Toavoid the state-explosion problem while model checking, MCMAS employs a fxed limited number of interleaved sessions. Moreover, it is assumed that the verifcation process does not support all types of nesting while encrypting messages and the messages are of fnite-length. These assumptions are applied because an attack to a protocol may be usually found in a small run of the protocol consisting of only a few sessions [48]. No proof system has been presented for the CTLK logic yet. So, this logic does not have anyproven soundness and completeness theorems [8]. However, one can develop a proof system with respect to the underlying semantics of CTLK and check if the logic is sound and complete accord­ 3Thisisa BDD-based model checkerfor multi-agentsystems support­ing temporal-epistemic specifcations [43] ing to that proof system. CTLK is powerful and fexible enough to be extended. 
As a variant of this logic, CTLS5n has been developed to model-check detectability of attacks in multi-agent systems that are automatically generated for security protocols [9]. As another extension of CTLK, CTLKR has been developed for the analysis of an e-voting protocol -calledFoo -witha passive attacker model who can link the receipt-providing principal and its vote [10]. Finally, ICTLK [12] has been developed for analyzing an unbounded number of concurrent protocol sessions. These extensions of CTLK use the same tools for verifying secu­rity protocols automatically. Contemporary to CTLK, Luo et al. provided a temporal epistemic logic -called ECKLn -that was applied in verifying the NSPK protocol using model checking techniques similarly.Todo so, the authors implemented and automatically verifed securityprotocols in a model checker for multiagent systems [9]. 4 Conclusions One of the main approaches for formal analysis of authen­tication protocols is using epistemic or temporal epistemic logics. The semantics of such logics should be able to model the protocol runs, including those executed by at­tackers. Then, theorem provers or model checkers arebuilt on these logics to analyze the protocols against intended properties. Comparing the logics in Figure 2, it seems that WS5 and its extensions are the best choices for protocol analysis since they are sound, complete, and omniscience-free. However, Figures 3 and 4 show that CTLK and its extensions are very powerful in security protocol analysis, make use of automatic compilers and model checkers, and prevent bad effects of logical omniscience while modeling the Dolev-Yao message deduction. It has been shown that it is an undecidable problem to fnd whether a security protocol is indeed secure or not [55]. Thus, it is practical to use trusted security protocols analyzedbydifferent formal methodstoprovidefast solu­tionsforexisting problemsofreallife systems.Atthe same time, it is practical to move toward the best formal system forverifying authentication protocols.Agood measure for fnding such a formal system for security protocol analysis is to prove that it can model the Dolev-Yao message deduc­tion. In this line, we investigated the epistemic and tempo­ral epistemic logics and showed how far their properties such as soundness, completeness, being omniscience-free, and expressiveness may affect the analysis of authentica­tion protocols. Some of these logics have no proof sys­tem, thus they have no soundness or completeness theo­rems. However, theymay apply model checkers for analy­sis and use awareness algorithms for preventing the logical omniscience problem while modeling the Dolev-Yao mes­sage deduction explicitly [30, 45, 8]. Preventing the logi­cal omniscience or avoiding its bad effects restricts a prin­cipal’s knowledge to what he can compute or derive with his knownkeys and messages. Some other epistemic log- S. Ahmadi et al. ics have proof systems, are omniscience-free, and model the attacker capabilities implicitly [19, 20, 4]. If such log­ics are logically sound, their derived judgments are more trustworthy. If they are complete, they may make use of automatic provers. Comparing Figures 2 and 4, the epistemic logics modeling the Dolev-Yao message deduction are either omniscience-free or prevent the bad effects of omniscience. In fact, provers are usually built on logics that preserve properties such as soundness, completeness, and being omniscience-free so that one can trust their output. 
At the same time, model checkers deal with the state explosion problem by imposing some assumptions so that they can verify security protocols within an acceptable time. How­ever, such assumptions may not stop us from using model checkers since they cover a wide range of security proto­cols and use existing tools. Finally, the expressiveness of such logics makes them powerful enough to both formal­ize attacker’s capabilities by logical formulas and analyze different classes of authentication protocols. Specifcally,if the logic has both temporal and epistemic modalities, it can analyze highly time-dependent protocols such as stream authentication protocols. Although it has been shown that logics using algorith­mic knowledge can model the Dolev-Yao message deduc­tive explicitly [30], to the best of our knowledge it has not been proven if a temporal epistemic logic of authentication can model such an attacker implicitly using permutation-based generalizedKripkesemantics.Thiscanbeatopicfor further research. As seen in this paper, all of the logics that model the attacker capabilities implicitly are omniscience-free. Thus, this can be a starting point for this topic of re­search. Finally, it would be benefcial if such an overview extends to the use of epistemic and temporal epistemic log­ics in analyzing other security/privacyproperties. References [1] M. Abadi and A.D. Gordon (1999) A calculus for cryptographic protocols: The spi calculus, Infor­mation and computation, Elsevier, 148(1): 1–70. https://doi.org/10.1006/inco.1998. 2740 [2] M. Abadi and P. Rogaway (2002) Reconciling two views of cryptography (the computational sound­ness of formal encryption), Journal of cryptol­ogy, Springer, 15(2): 103–127. http://dx.doi. org/10.1007/s00145-007-0203-0 [3] M. Abadi and M.R. Tuttle (1991), A semantics for a logic of authentication, Proceedings of the tenth annualACMsymposium on Principlesof distributed computing,ACM, 201–216. https://doi.org/ 10.1145/112600.112618 [4] S. Ahmadi and M.S.Fallah (2018) An Omniscience-Free Temporal Logic of Knowledge for Verify­ing Authentication Protocols, Bulletin of the Ira­nian Mathematical Society, Springer, 44(5): 1–23. https://doi.org/10.1007/s41980-018-0087-9 [5] F. Belardinelli and A. Lomuscio (2010) Interactions between time and knowledge in a frst-order logic for multi-agent systems, Proceedings of the Twelfth In­ternational Conference on the Principles of Knowl­edge Representation and Reasoning, AAAI Press, 38–48. [6] F. Belardinelli and A. Lomuscio (2012) Interactions between time and knowledge in a frst-order logic for multi-agent systems: completeness results, Journal of Artifcial Intelligence Research, AI Access Founda­tion, 1–45. https://doi.org/10.1613/jair.3547 [7]B. Blanchet,B.Smyth,andV.Cheval(2015)ProVerif 1.90: Automatic Cryptographic Protocol Verifer, User Manual andTutorial. [8] , I. Boureanu, M. Cohen, and A. Lomuscio (2009) Automatic verifcation of temporal-epistemic proper­ties of cryptographic protocols, Journal of Applied Non-Classical Logics,Taylor&Francis, 19(4): 463– 487. https://doi.org/10.3166/jancl.19.463-487 [9] I. Boureanu, M. Cohen, and A. Lomuscio (2010) Model checking detectability of attacks in multiagent systems, Proceedings of the 9th International Confer­ence onAutonomousAgents and Multiagent Systems, InternationalFoundation for Autonomous Agents and Multiagent Systems, 1: 691–698. [10] I. Boureanu, A.V. Jones, and A. 
Benchmark Problems for Exhaustive Exact Maximum Clique Search Algorithms

Sándor Szabó
Institute of Mathematics and Informatics, University of Pécs
Ifjúság útja 6, H-7624 Pécs, Hungary
E-mail: sszabo7@hotmail.com

Bogdán Zaválnij
Alfréd Rényi Institute of Mathematics, Hungarian Academy of Sciences
Reáltanoda u. 13–15, H-1053 Budapest, Hungary
E-mail: bogdan@renyi.hu

Keywords: clique, maximal clique, maximum clique, graph coloring, random graph, Mycielski graph

Received: February 12, 2019

There are well established, widely used benchmark tests to assess the performance of practical exact clique search algorithms. In this paper a family of further benchmark problems is proposed, mainly to test exhaustive clique search procedures.

Povzetek: Podanih je nekaj novih standardnih problemov za testiranje algoritmov za iskanje klik.

1 Preliminaries

Let G = (V, E) be a finite simple graph. Here V is the set of vertices of G and E is the set of edges of G. The finiteness of G means that G has finitely many vertices and finitely many edges, that is, |V| < ∞, |E| < ∞. The simplicity of G means that G does not have any loop and it does not have double edges.

Let C be a subset of V. If two distinct nodes in C are always adjacent in G, then C is called a clique in G. When C has k elements, we talk about a k-clique. We include the cases k = 0 and k = 1 as well, when |C| = 0 and |C| = 1, respectively, although in these cases C does not have two distinct elements.

A clique is maximal in G if it is not part of any larger clique in G. In other words, a clique of the graph G is maximal if it cannot be extended to a larger clique by adding a new node of G. A k-clique is a maximum clique in G if G does not contain any (k+1)-clique. All the maximum cliques of a graph G have the same number of nodes. We call this well defined number the clique number of G and we denote it by ω(G).

A number of problems are referred to as clique search problems [1].

Problem 1. Given a finite simple graph G. List all maximal cliques of G without repetition.

Problem 2. Given a finite simple graph G and a positive integer k. Decide if G has a k-clique.

Problem 3. Given a finite simple graph G. Determine ω(G).

Problem 4. Given a finite simple graph G. List all maximum cliques of G without repetition.

An algorithm to solve Problem 1 can be found in [2]. The algorithm is commonly known as the Bron-Kerbosch algorithm. Obviously, the Bron-Kerbosch algorithm can be used to solve Problems 2, 3 and 4. A more efficient algorithm to solve these problems was first given in [3]. The algorithm is known under the name Carraghan-Pardalos algorithm. The Bron-Kerbosch and Carraghan-Pardalos algorithms are the classical algorithms that form the base of many further clique search procedures. Such algorithms are presented in [14], [26], [24], [10], [9], but this list is not intended to be complete.
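The recursion behind these exhaustive procedures can be illustrated with a short sketch. The following Python fragment is only an illustration in the spirit of the Carraghan-Pardalos branch and bound, not the implementation of any of the cited works; the adjacency representation (a dictionary mapping each vertex to the set of its neighbours) and the function name maximum_clique are assumptions made for the example.

def maximum_clique(adj):
    # adj: dict mapping each vertex to the set of its neighbours (assumed symmetric).
    best = []

    def expand(current, candidates):
        nonlocal best
        # Prune: even adding every remaining candidate cannot beat the best clique so far.
        if len(current) + len(candidates) <= len(best):
            return
        if not candidates:
            best = list(current)  # current is strictly larger than best at this point
            return
        for v in sorted(candidates):
            candidates = candidates - {v}              # v is explored in this branch only
            expand(current + [v], candidates & adj[v])

    expand([], set(adj))
    return best

# Example: a 5-cycle on the vertices 1,...,5 with the extra chord {1, 3} has clique number 3.
g = {1: {2, 3, 5}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {1, 4}}
print(maximum_clique(g))                               # prints a maximum clique, here [1, 2, 3]

The pruning test mirrors the bounding step of exact algorithms of this kind: a branch is abandoned as soon as the current clique together with all remaining candidates cannot exceed the best clique found so far.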
The complexity theory of algorithms tells us that Problem 2 is in the NP-complete complexity class. (See for instance [6].) Consequently, Problem 3 must be NP-hard.

We color the vertices of G such that the following conditions are satisfied.

(1) Each vertex of G is colored with exactly one color.
(2) Vertices of G connected by an edge receive distinct colors.

A coloring of the nodes of G that satisfies conditions (1), (2) is called a legal coloring or well coloring of the nodes of G. Suppose that the nodes of G can be colored legally using k colors. We may use a map f : V → {1,...,k} to describe a coloring of the nodes of G. The numbers 1,...,k play the roles of the colors and f(v) is the color of the vertex v for each v ∈ V. If for adjacent nodes u and v of G the equation f(u) = f(v) implies u = v, then the coloring defined by the map f is a legal coloring.

There is a number of colors k such that the nodes of G can be colored legally using k colors and the nodes of G cannot be colored legally using k - 1 colors. This well defined number k is called the chromatic number of the graph G and it is denoted by χ(G).

Graph coloring is a vast subject and we cannot do justice to this venerable field. In this paper we take a very narrow view. We are interested in only one particular application. Note that ω(G) ≤ χ(G) holds for each finite simple graph G, and so coloring of the nodes can be used to estimate the size of a maximum clique. However, the gap between ω(G) and χ(G) can be arbitrarily large. J. Mycielski [13] exhibited a graph M(k) for which ω(M(k)) = 2 and χ(M(k)) = k for each integer k ≥ 2.

In order to find bounds for ω(G) the following node coloring was proposed in [21]. Let us choose an integer s ≥ 2 and consider a coloring of the nodes of G that satisfies the following conditions.

(1′) Each vertex of G is colored with exactly one color.
(2′) If the vertices v_1,...,v_s are vertices of a clique in G, then the vertices v_1,...,v_s cannot all receive the same color.

A coloring of the nodes of G satisfying the conditions (1′), (2′) is called a monochrome s-clique free coloring. In short we will talk about an s-clique free coloring. For s = 2 the monochrome s-clique free coloring of the nodes gives back the legal coloring of the nodes. There is a well defined minimum number k such that the nodes of G have an s-clique free coloring using k colors. This k is referred to as the s-clique free chromatic number of G and it is denoted by χ^(s)(G). The inequality ω(G) ≤ (s - 1)χ^(s)(G) shows that s-clique free coloring of the nodes can be used to establish an upper bound for the clique number.

A number of problems are customarily considered in connection with coloring the nodes of a graph.

Problem 5. Given a finite simple graph G and a positive integer k. Decide whether the nodes of G admit a legal coloring using k colors.

Problem 6. Given a finite simple graph G. Determine χ(G).

It is a well known result of the complexity theory of algorithms that Problem 5 belongs to the P complexity class for k = 2 and it belongs to the NP-complete complexity class for k ≥ 3. (See for example [15].) It follows that for k ≥ 3 Problem 6 is NP-hard.

Problem 7. Given a finite simple graph G and two positive integers s, k. Decide if the nodes of G have a legal s-clique free coloring with k colors.

It was established in [23] that for k = 3, s ≥ 3 Problem 7 is NP-complete.
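Before turning to the construction, the bounding role of coloring mentioned above can be made concrete with a small sketch. Since ω(G) ≤ χ(G), the number of colors used by any legal coloring, for instance one produced by the usual greedy first-fit heuristic, is an upper bound on the clique number. The Python fragment below is a hedged illustration of this idea; the adjacency-set representation, the largest-degree-first ordering and the name greedy_coloring are assumptions for the example, and, as the Mycielski graphs show, the bound obtained in this way can be arbitrarily far from ω(G).

def greedy_coloring(adj):
    # adj: dict mapping each vertex to the set of its neighbours (assumed symmetric).
    color = {}
    for v in sorted(adj, key=lambda u: -len(adj[u])):   # largest degree first
        taken = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in taken:                                # smallest color not used by a colored neighbour
            c += 1
        color[v] = c
    return color

# The number of colors used by a legal coloring bounds omega(G) from above.
g = {1: {2, 3, 5}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {1, 4}}
print(max(greedy_coloring(g).values()))                  # 3, hence omega(g) <= 3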
2 A Mycielski type result

As we have already mentioned, the chromatic number can be a poor upper estimate of the clique number. By Mycielski's construction there are 3-clique free graphs with arbitrarily large chromatic number. P. Erdős [5] generalized this result. Let us call the length of a shortest chordless cycle in a graph the girth of the graph. Clearly, the girth of a 3-clique free graph must be at least 4. Erdős proved that for given positive integers k and l there is a finite simple graph G with girth(G) ≥ l, χ(G) ≥ k. Erdős's proof is not constructive and so it is not at all straightforward how the resulting graphs could be used in constructing test instances.

In this section we present another extension of Mycielski's result. We replace the legal coloring of the nodes of a graph by a legal s-clique free coloring of the nodes of the graph. Consequently, the s-clique free chromatic number χ^(s)(G) will play the role of the chromatic number χ(G). The result is motivated by the fact that one might try to construct clique search test instances by replacing the Mycielski graph by the graph emerging from the proof of the generalized version.

Theorem 1. Let us choose two positive integers s and k with s ≥ 3 and k ≥ 2^(s-1)/(s - 1). There is a finite simple graph L(s,k) such that ω(L(s,k)) ≤ 2^(s-1) and χ^(s)(L(s,k)) ≥ k.

The reader may notice that the graph L(2,k) is isomorphic to the Mycielski graph M(k).

Proof. The proof will be constructive. We start with the special case s = 3. We choose an integer k for which k ≥ 2^(3-1)/(3 - 1) = 2. Let M(k) be the Mycielski graph with parameter k. Let u_1,...,u_n be the nodes of M(k). We substitute the node u_i of M(k) by an isomorphic copy M_i(k) of the Mycielski graph for each i, 1 ≤ i ≤ n. Let v_{i,1},...,v_{i,n} be the nodes of M_i(k). We assume that the nodes v_{1,1},...,v_{1,n},...,v_{n,1},...,v_{n,n} are pair-wise distinct. These nodes will be the nodes of the graph L(3,k).

The edges of M_i(k) are going to be edges of L(3,k) for each i, 1 ≤ i ≤ n. Further, whenever the unordered pair {u_i, u_j} is an edge of M(k), then we add the edge {v_{i,α}, v_{j,β}} to L(3,k) for each α, β, 1 ≤ α, β ≤ n. The dedicated reader will not fail to notice that the construction we just presented is the so called lexicographic product of the graphs M(k) and M(k).

We claim that ω(L(3,k)) ≤ 4. In order to verify the claim we assume on the contrary that ω(L(3,k)) ≥ 5. Let C be a 5-clique in L(3,k). Set V_i = {v_{i,1},...,v_{i,n}}. Note that the set V_i induces M_i(k) in L(3,k) as a subgraph of L(3,k). From the fact that ω(M_i(k)) ≤ 2 it follows that C may have at most 2 nodes in V_i for each i, 1 ≤ i ≤ n. Therefore C has nodes in some M_i(k) for at least 3 distinct values of i. Suppose that i and j are distinct numbers in the set {1,...,n}. A node in M_i(k) and a node in M_j(k) can be adjacent only if the unordered pair {u_i, u_j} is an edge of M(k). This means that M(k) contains a 3-clique. But M(k) does not contain any 3-clique. This contradiction completes the verification of the claim.

We claim that χ^(3)(L(3,k)) ≥ k. In order to prove the claim let us assume on the contrary that χ^(3)(L(3,k)) ≤ k - 1. Set V = V_1 ∪ ··· ∪ V_n = {v_{1,1},...,v_{1,n},...,v_{n,1},...,v_{n,n}}. Suppose that the map f : V → {1,...,k - 1} defines a 3-clique free coloring of the nodes of L(3,k). The restriction of f to the set V_i is a map g_i : V_i → {1,...,k - 1}. Clearly, the map g_i defines a coloring of the nodes of the graph M_i(k). From the fact that χ(M_i(k)) ≥ k it follows that there are two distinct adjacent nodes of M_i(k) such that the two nodes receive the same color c_i. Set U = {u_1,...,u_n}. Using the color c_i we define a map h : U → {1,...,k - 1}.
We set h(u_i) = c_i for each i, 1 ≤ i ≤ n. Note that the map h defines a legal coloring of the nodes of the graph M(k). The only thing which needs verification is that if u_i and u_j are distinct adjacent nodes of M(k), then c_i ≠ c_j.

Let us assume on the contrary that u_i and u_j are distinct adjacent nodes of M(k) and c_i = c_j. The graph M_i(k) has two distinct adjacent nodes v_{i,i(1)} and v_{i,i(2)} such that f(v_{i,i(1)}) = f(v_{i,i(2)}) = c_i. Similarly, the graph M_j(k) has two distinct adjacent nodes v_{j,j(1)} and v_{j,j(2)} such that f(v_{j,j(1)}) = f(v_{j,j(2)}) = c_j. Note that the nodes v_{i,i(1)}, v_{i,i(2)}, v_{j,j(1)}, v_{j,j(2)} are the nodes of a 4-clique in L(3,k). This means that the coloring defined by the map f is not a 3-clique free coloring of the nodes of L(3,k). This shows that the coloring defined by the map h is a legal coloring of the nodes of M(k). The coloring defined by h uses at most k - 1 colors. This contradicts the fact that χ(M(k)) ≥ k. Thus we may conclude that χ^(3)(L(3,k)) ≥ k as we claimed.

Let us turn to the special case s = 4. We choose an integer k for which k ≥ 2^(4-1)/(4 - 1), that is, k ≥ 3. Let M(k) be the Mycielski graph with parameter k. Let u_1,...,u_n be the nodes of M(k). We substitute the node u_i of M(k) by an isomorphic copy L_i(3,k) of the graph L(3,k) for each i, 1 ≤ i ≤ n. Let v_{i,1},...,v_{i,m} be the nodes of L_i(3,k). We assume that the nodes v_{1,1},...,v_{1,m},...,v_{n,1},...,v_{n,m} are pair-wise distinct. These nodes will be the nodes of the graph L(4,k).

The edges of L_i(3,k) are going to be edges of L(4,k) for each i, 1 ≤ i ≤ n. Further, whenever the unordered pair {u_i, u_j} is an edge of M(k), then we add the edge {v_{i,α}, v_{j,β}} to L(4,k) for each α, β, 1 ≤ α, β ≤ m.

We claim that ω(L(4,k)) ≤ 8. In order to verify the claim we assume on the contrary that ω(L(4,k)) ≥ 9. Let C be a 9-clique in L(4,k). Note that the set V_i = {v_{i,1},...,v_{i,m}} induces L_i(3,k) in L(4,k) as a subgraph of L(4,k). From the fact that ω(L_i(3,k)) ≤ 4 it follows that C may have at most 4 nodes in V_i for each i, 1 ≤ i ≤ n. Therefore C has nodes in some L_i(3,k) for at least 3 distinct values of i. Suppose that i and j are distinct numbers in the set {1,...,n}. A node in L_i(3,k) and a node in L_j(3,k) can be adjacent only if the unordered pair {u_i, u_j} is an edge of M(k). This means that M(k) contains a 3-clique. But M(k) does not contain any 3-clique. This contradiction completes the proof of the claim.

We claim that χ^(4)(L(4,k)) ≥ k. In order to prove the claim let us assume on the contrary that χ^(4)(L(4,k)) ≤ k - 1. Set V = V_1 ∪ ··· ∪ V_n = {v_{1,1},...,v_{1,m},...,v_{n,1},...,v_{n,m}}. Suppose that the map f : V → {1,...,k - 1} defines a 4-clique free coloring of the nodes of L(4,k). The restriction of f to the set V_i is a map g_i : V_i → {1,...,k - 1}. Clearly, the map g_i defines a coloring of the nodes of the graph L_i(3,k). From the fact that χ^(3)(L_i(3,k)) ≥ k it follows that there is a 3-clique in L_i(3,k) such that the 3 nodes of the clique receive the same color c_i.

Set U = {u_1,...,u_n}. Using the color c_i we define a map h : U → {1,...,k - 1}. We set h(u_i) = c_i for each i, 1 ≤ i ≤ n. Note that the map h defines a legal coloring of the nodes of the graph M(k). The only thing which needs verification is that if u_i and u_j are distinct adjacent nodes of M(k), then c_i ≠ c_j.

Let us assume on the contrary that c_i = c_j. The graph L_i(3,k) has 3 distinct pair-wise adjacent nodes v_{i,i(1)}, v_{i,i(2)}, v_{i,i(3)} such that f(v_{i,i(1)}) = f(v_{i,i(2)}) = f(v_{i,i(3)}) = c_i.
Similarly, the graph L_j(3,k) has 3 distinct pair-wise adjacent nodes v_{j,j(1)}, v_{j,j(2)}, v_{j,j(3)} such that f(v_{j,j(1)}) = f(v_{j,j(2)}) = f(v_{j,j(3)}) = c_j. Note that the nodes v_{i,i(1)}, v_{i,i(2)}, v_{i,i(3)}, v_{j,j(1)}, v_{j,j(2)}, v_{j,j(3)} are the nodes of a 6-clique in L(4,k). This means that the coloring defined by the map f is not a 4-clique free coloring of the nodes of L(4,k). This shows that the coloring defined by the map h is a legal coloring of the nodes of M(k). In the coloring defined by h at most k - 1 colors occur. This contradicts the fact that χ(M(k)) ≥ k. Thus we may draw the conclusion that χ^(4)(L(4,k)) ≥ k as we claimed.

Continuing in this way we can complete the proof of the theorem. □

3 Test problems

Problems 2 and 3 are commonly referred to as the k-clique and the maximum clique problems, respectively. As we have pointed out, it is a well known result of the complexity theory of algorithms that the maximum clique problem is NP-hard. Loosely speaking, this can be interpreted as saying that the maximum clique problem is computationally demanding. As at this moment there are no readily available mathematical tools to evaluate the performance of practical clique search algorithms, the standard procedure is to carry out numerical experiments on a battery of well selected benchmark tests.

The most widely used test instances are the Erdős–Rényi random graphs, graphs from the second DIMACS challenge (ftp://dimacs.rutgers.edu/pub/challenge/) [8], combinatorial problems of monotonic matrices [25, 22], and hard coding problems of Deletion-Correcting Codes (http://neilsloane.com/doc/graphs.html) [20].

Evaluating the performances of various clique search algorithms is a delicate matter. On the one hand one would like to reach some practically relevant conclusion about the competing algorithms. On the other hand this conclusion is based on a finite list of instances. One has to be ever cautious not to draw overly sweeping conclusions from these inherently limited experiments. (We intend to contrast this approach with the asymptotic techniques, which are intimately tied to infinity.) The situation is of course not completely pessimistic. After all, these benchmarks were successful at shedding light on the practicality of many of the latest clique search procedures. However, we should strive to enhance the test procedures. The main purpose of this paper is to propose new benchmark instances.

There are occasions when we are trying to locate a large clique in a given graph such that the clique is not necessarily optimal. This approach is referred to as a non-exact method, to contrast it with exhaustive search. For instance, constructing a large time table in this way can be practically important and useful even without a certificate of optimality. The benchmark tests are of course relevant in connection with non-exact procedures too. In order to avoid any unnecessary confusion we would like to emphasize that in this paper we are focusing solely on exact clique search methods.

Let n be a positive integer and let p be a real number such that 0 ≤ p ≤ 1. An Erdős–Rényi random graph with parameters n, p is a graph G with vertices 1, 2,...,n. The probability that the unordered pair {x, y} is an edge of G is equal to p for each x, y, 1 ≤ x