{"id":27689,"date":"2024-03-16T09:00:31","date_gmt":"2024-03-16T09:00:31","guid":{"rendered":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/"},"modified":"2024-03-22T11:38:41","modified_gmt":"2024-03-22T11:38:41","slug":"how-to-utilize-mapreduce-in-hadoop","status":"publish","type":"post","link":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/","title":{"rendered":"How to utilize MapReduce in Hadoop?"},"content":{"rendered":"<p>To use Hadoop&#8217;s MapReduce, you need to follow these steps:<\/p>\n<ol>\n<li>Define the Map function: The Map function is the process of splitting input data into key-value pairs. You need to write a Map function to define how the input data is transformed into key-value pairs.<\/li>\n<li>Definition of the Reduce function: The Reduce function is the process of handling the key-value pairs outputted by the Map function. You need to write a Reduce function to define how to process the key-value pairs outputted by the Map function.<\/li>\n<li>Configure a MapReduce job: You&#8217;ll need to utilize Hadoop&#8217;s configuration file to set various parameters for the MapReduce job, such as input path, output path, Map function, and Reduce function.<\/li>\n<li>Running MapReduce jobs: You can submit and run MapReduce jobs using Hadoop&#8217;s command line tools or programming interfaces.<\/li>\n<\/ol>\n<p>Here is an example code using Hadoop MapReduce:<\/p>\n<pre class=\"post-pre\"><code><span class=\"hljs-keyword\">import<\/span> org.apache.hadoop.conf.Configuration;\r\n<span class=\"hljs-keyword\">import<\/span> org.apache.hadoop.fs.Path;\r\n<span class=\"hljs-keyword\">import<\/span> org.apache.hadoop.io.IntWritable;\r\n<span class=\"hljs-keyword\">import<\/span> org.apache.hadoop.io.Text;\r\n<span class=\"hljs-keyword\">import<\/span> org.apache.hadoop.mapreduce.Job;\r\n<span class=\"hljs-keyword\">import<\/span> org.apache.hadoop.mapreduce.Mapper;\r\n<span 
class=\"hljs-keyword\">import<\/span> org.apache.hadoop.mapreduce.Reducer;\r\n<span class=\"hljs-keyword\">import<\/span> org.apache.hadoop.mapreduce.lib.input.FileInputFormat;\r\n<span class=\"hljs-keyword\">import<\/span> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;\r\n\r\n<span class=\"hljs-keyword\">import<\/span> java.io.IOException;\r\n<span class=\"hljs-keyword\">import<\/span> java.util.StringTokenizer;\r\n\r\n<span class=\"hljs-keyword\">public<\/span> <span class=\"hljs-keyword\">class<\/span> <span class=\"hljs-title class_\">WordCount<\/span> {\r\n\r\n  <span class=\"hljs-keyword\">public<\/span> <span class=\"hljs-keyword\">static<\/span> <span class=\"hljs-keyword\">class<\/span> <span class=\"hljs-title class_\">TokenizerMapper<\/span>\r\n       <span class=\"hljs-keyword\">extends<\/span> <span class=\"hljs-title class_\">Mapper<\/span>&lt;Object, Text, Text, IntWritable&gt;{\r\n\r\n    <span class=\"hljs-keyword\">private<\/span> <span class=\"hljs-keyword\">final<\/span> <span class=\"hljs-keyword\">static<\/span> <span class=\"hljs-type\">IntWritable<\/span> <span class=\"hljs-variable\">one<\/span> <span class=\"hljs-operator\">=<\/span> <span class=\"hljs-keyword\">new<\/span> <span class=\"hljs-title class_\">IntWritable<\/span>(<span class=\"hljs-number\">1<\/span>);\r\n    <span class=\"hljs-keyword\">private<\/span> <span class=\"hljs-type\">Text<\/span> <span class=\"hljs-variable\">word<\/span> <span class=\"hljs-operator\">=<\/span> <span class=\"hljs-keyword\">new<\/span> <span class=\"hljs-title class_\">Text<\/span>();\r\n\r\n    <span class=\"hljs-keyword\">public<\/span> <span class=\"hljs-keyword\">void<\/span> <span class=\"hljs-title function_\">map<\/span><span class=\"hljs-params\">(Object key, Text value, Context context\r\n                    )<\/span> <span class=\"hljs-keyword\">throws<\/span> IOException, InterruptedException {\r\n      <span class=\"hljs-type\">StringTokenizer<\/span> <span 
class=\"hljs-variable\">itr<\/span> <span class=\"hljs-operator\">=<\/span> <span class=\"hljs-keyword\">new<\/span> <span class=\"hljs-title class_\">StringTokenizer<\/span>(value.toString());\r\n      <span class=\"hljs-keyword\">while<\/span> (itr.hasMoreTokens()) {\r\n        word.set(itr.nextToken());\r\n        context.write(word, one);\r\n      }\r\n    }\r\n  }\r\n\r\n  <span class=\"hljs-keyword\">public<\/span> <span class=\"hljs-keyword\">static<\/span> <span class=\"hljs-keyword\">class<\/span> <span class=\"hljs-title class_\">IntSumReducer<\/span>\r\n       <span class=\"hljs-keyword\">extends<\/span> <span class=\"hljs-title class_\">Reducer<\/span>&lt;Text,IntWritable,Text,IntWritable&gt; {\r\n    <span class=\"hljs-keyword\">private<\/span> <span class=\"hljs-type\">IntWritable<\/span> <span class=\"hljs-variable\">result<\/span> <span class=\"hljs-operator\">=<\/span> <span class=\"hljs-keyword\">new<\/span> <span class=\"hljs-title class_\">IntWritable<\/span>();\r\n\r\n    <span class=\"hljs-keyword\">public<\/span> <span class=\"hljs-keyword\">void<\/span> <span class=\"hljs-title function_\">reduce<\/span><span class=\"hljs-params\">(Text key, Iterable&lt;IntWritable&gt; values,\r\n                       Context context\r\n                       )<\/span> <span class=\"hljs-keyword\">throws<\/span> IOException, InterruptedException {\r\n      <span class=\"hljs-type\">int<\/span> <span class=\"hljs-variable\">sum<\/span> <span class=\"hljs-operator\">=<\/span> <span class=\"hljs-number\">0<\/span>;\r\n      <span class=\"hljs-keyword\">for<\/span> (IntWritable val : values) {\r\n        sum += val.get();\r\n      }\r\n      result.set(sum);\r\n      context.write(key, result);\r\n    }\r\n  }\r\n\r\n  <span class=\"hljs-keyword\">public<\/span> <span class=\"hljs-keyword\">static<\/span> <span class=\"hljs-keyword\">void<\/span> <span class=\"hljs-title function_\">main<\/span><span class=\"hljs-params\">(String[] args)<\/span> <span 
class=\"hljs-keyword\">throws<\/span> Exception {\r\n    <span class=\"hljs-type\">Configuration<\/span> <span class=\"hljs-variable\">conf<\/span> <span class=\"hljs-operator\">=<\/span> <span class=\"hljs-keyword\">new<\/span> <span class=\"hljs-title class_\">Configuration<\/span>();\r\n    <span class=\"hljs-type\">Job<\/span> <span class=\"hljs-variable\">job<\/span> <span class=\"hljs-operator\">=<\/span> Job.getInstance(conf, <span class=\"hljs-string\">\"word count\"<\/span>);\r\n    job.setJarByClass(WordCount.class);\r\n    job.setMapperClass(TokenizerMapper.class);\r\n    job.setCombinerClass(IntSumReducer.class);\r\n    job.setReducerClass(IntSumReducer.class);\r\n    job.setOutputKeyClass(Text.class);\r\n    job.setOutputValueClass(IntWritable.class);\r\n    FileInputFormat.addInputPath(job, <span class=\"hljs-keyword\">new<\/span> <span class=\"hljs-title class_\">Path<\/span>(args[<span class=\"hljs-number\">0<\/span>]));\r\n    FileOutputFormat.setOutputPath(job, <span class=\"hljs-keyword\">new<\/span> <span class=\"hljs-title class_\">Path<\/span>(args[<span class=\"hljs-number\">1<\/span>]));\r\n    System.exit(job.waitForCompletion(<span class=\"hljs-literal\">true<\/span>) ? <span class=\"hljs-number\">0<\/span> : <span class=\"hljs-number\">1<\/span>);\r\n  }\r\n}\r\n<\/code><\/pre>\n<p>This sample code is a simple word count program. It splits each word in the input file into key-value pairs, then counts the occurrence of each word. 
Lastly, it outputs each word along with its frequency.<\/p>\n<p>You can use Hadoop&#8217;s command line tools to package the code into a JAR file, and then submit and run the MapReduce job with the following command:<\/p>\n<pre class=\"post-pre\"><code>hadoop jar WordCount.jar WordCount input output\r\n<\/code><\/pre>\n<p>Here, WordCount.jar is the packaged JAR file, WordCount is the main class, input is the input path, and output is the output path (the output directory must not already exist, or the job will fail).<\/p>\n<p>Before running MapReduce jobs, make sure Hadoop is installed and configured, and that the cluster is running.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>To use Hadoop&#8217;s MapReduce, you need to follow these steps: Define the Map function: The Map function is the process of splitting input data into key-value pairs. You need to write a Map function to define how the input data is transformed into key-value pairs. Definition of the Reduce function: The Reduce function is the [&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_import_markdown_pro_load_document_selector":0,"_import_markdown_pro_submit_text_textarea":"","footnotes":""},"categories":[1],"tags":[],"class_list":["post-27689","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v21.5 (Yoast SEO v21.5) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How to utilize MapReduce in Hadoop? 
- Blog - Silicon Cloud<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How to utilize MapReduce in Hadoop?\" \/>\n<meta property=\"og:description\" content=\"To use Hadoop&#8217;s MapReduce, you need to follow these steps: Define the Map function: The Map function is the process of splitting input data into key-value pairs. You need to write a Map function to define how the input data is transformed into key-value pairs. Definition of the Reduce function: The Reduce function is the [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/\" \/>\n<meta property=\"og:site_name\" content=\"Blog - Silicon Cloud\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/SiliCloudGlobal\/\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-16T09:00:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-22T11:38:41+00:00\" \/>\n<meta name=\"author\" content=\"Benjamin Taylor\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@SiliCloudGlobal\" \/>\n<meta name=\"twitter:site\" content=\"@SiliCloudGlobal\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Benjamin Taylor\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/\"},\"author\":{\"name\":\"Benjamin Taylor\",\"@id\":\"https:\/\/www.silicloud.com\/blog\/#\/schema\/person\/ac801fe9549a25960ce48aa2e0a691c9\"},\"headline\":\"How to utilize MapReduce in Hadoop?\",\"datePublished\":\"2024-03-16T09:00:31+00:00\",\"dateModified\":\"2024-03-22T11:38:41+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/\"},\"wordCount\":262,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.silicloud.com\/blog\/#organization\"},\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/\",\"url\":\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/\",\"name\":\"How to utilize MapReduce in Hadoop? 
- Blog - Silicon Cloud\",\"isPartOf\":{\"@id\":\"https:\/\/www.silicloud.com\/blog\/#website\"},\"datePublished\":\"2024-03-16T09:00:31+00:00\",\"dateModified\":\"2024-03-22T11:38:41+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.silicloud.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How to utilize MapReduce in Hadoop?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.silicloud.com\/blog\/#website\",\"url\":\"https:\/\/www.silicloud.com\/blog\/\",\"name\":\"Silicon Cloud Blog\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.silicloud.com\/blog\/#organization\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.silicloud.com\/blog\/#organization\",\"name\":\"Silicon Cloud Blog\",\"url\":\"https:\/\/www.silicloud.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.silicloud.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.silicloud.com\/blog\/wp-content\/uploads\/2023\/11\/EN-SILICON-Full.png\",\"contentUrl\":\"https:\/\/www.silicloud.com\/blog\/wp-content\/uploads\/2023\/11\/EN-SILICON-Full.png\",\"width\":1024,\"height\":1024,\"caption\":\"Silicon Cloud Blog\"},\"image\":{\"@id\":\"https:\/\/www.silicloud.com\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/SiliCloudGlobal\/\",\"https:\/\/twitter.com\/SiliCloudGlobal\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.silicloud.com\/blog\/#\/schema\/person\/ac801fe9549a25960ce48aa2e0a691c9\",\"name\":\"Benjamin 
Taylor\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.silicloud.com\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/ec2e3d3e2d525fd148047c4520ae7c1cdccd1f4b48a1a488422b31f04f345c14?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/ec2e3d3e2d525fd148047c4520ae7c1cdccd1f4b48a1a488422b31f04f345c14?s=96&d=mm&r=g\",\"caption\":\"Benjamin Taylor\"},\"url\":\"https:\/\/www.silicloud.com\/blog\/author\/benjamintaylor\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"How to utilize MapReduce in Hadoop? - Blog - Silicon Cloud","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/","og_locale":"en_US","og_type":"article","og_title":"How to utilize MapReduce in Hadoop?","og_description":"To use Hadoop&#8217;s MapReduce, you need to follow these steps: Define the Map function: The Map function is the process of splitting input data into key-value pairs. You need to write a Map function to define how the input data is transformed into key-value pairs. Definition of the Reduce function: The Reduce function is the [&hellip;]","og_url":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/","og_site_name":"Blog - Silicon Cloud","article_publisher":"https:\/\/www.facebook.com\/SiliCloudGlobal\/","article_published_time":"2024-03-16T09:00:31+00:00","article_modified_time":"2024-03-22T11:38:41+00:00","author":"Benjamin Taylor","twitter_card":"summary_large_image","twitter_creator":"@SiliCloudGlobal","twitter_site":"@SiliCloudGlobal","twitter_misc":{"Written by":"Benjamin Taylor","Est. 
reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/#article","isPartOf":{"@id":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/"},"author":{"name":"Benjamin Taylor","@id":"https:\/\/www.silicloud.com\/blog\/#\/schema\/person\/ac801fe9549a25960ce48aa2e0a691c9"},"headline":"How to utilize MapReduce in Hadoop?","datePublished":"2024-03-16T09:00:31+00:00","dateModified":"2024-03-22T11:38:41+00:00","mainEntityOfPage":{"@id":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/"},"wordCount":262,"commentCount":0,"publisher":{"@id":"https:\/\/www.silicloud.com\/blog\/#organization"},"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/","url":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/","name":"How to utilize MapReduce in Hadoop? - Blog - Silicon Cloud","isPartOf":{"@id":"https:\/\/www.silicloud.com\/blog\/#website"},"datePublished":"2024-03-16T09:00:31+00:00","dateModified":"2024-03-22T11:38:41+00:00","breadcrumb":{"@id":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.silicloud.com\/blog\/how-to-utilize-mapreduce-in-hadoop\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.silicloud.com\/blog\/"},{"@type":"ListItem","position":2,"name":"How to utilize MapReduce in Hadoop?"}]},{"@type":"WebSite","@id":"https:\/\/www.silicloud.com\/blog\/#website","url":"https:\/\/www.silicloud.com\/blog\/","name":"Silicon Cloud 
Blog","description":"","publisher":{"@id":"https:\/\/www.silicloud.com\/blog\/#organization"},"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.silicloud.com\/blog\/#organization","name":"Silicon Cloud Blog","url":"https:\/\/www.silicloud.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.silicloud.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.silicloud.com\/blog\/wp-content\/uploads\/2023\/11\/EN-SILICON-Full.png","contentUrl":"https:\/\/www.silicloud.com\/blog\/wp-content\/uploads\/2023\/11\/EN-SILICON-Full.png","width":1024,"height":1024,"caption":"Silicon Cloud Blog"},"image":{"@id":"https:\/\/www.silicloud.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/SiliCloudGlobal\/","https:\/\/twitter.com\/SiliCloudGlobal"]},{"@type":"Person","@id":"https:\/\/www.silicloud.com\/blog\/#\/schema\/person\/ac801fe9549a25960ce48aa2e0a691c9","name":"Benjamin Taylor","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.silicloud.com\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/ec2e3d3e2d525fd148047c4520ae7c1cdccd1f4b48a1a488422b31f04f345c14?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/ec2e3d3e2d525fd148047c4520ae7c1cdccd1f4b48a1a488422b31f04f345c14?s=96&d=mm&r=g","caption":"Benjamin 
Taylor"},"url":"https:\/\/www.silicloud.com\/blog\/author\/benjamintaylor\/"}]}},"_links":{"self":[{"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/posts\/27689","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/comments?post=27689"}],"version-history":[{"count":1,"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/posts\/27689\/revisions"}],"predecessor-version":[{"id":61939,"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/posts\/27689\/revisions\/61939"}],"wp:attachment":[{"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/media?parent=27689"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/categories?post=27689"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.silicloud.com\/blog\/wp-json\/wp\/v2\/tags?post=27689"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}