✅ What Is Crawlability?
What is crawlability? Crawlability refers to a website’s ability to be accessed and navigated by search engine bots (also called web crawlers or spiders), such as Googlebot, Bingbot, or Yandex Bot. These bots systematically visit websites to read and collect content for indexing and ranking in search engines.
Think of crawlability as the “door” that lets search engines into your website. If that door is open and easy to navigate, bots can find all your important content. If it’s blocked, hidden, or confusing, search engines may miss entire sections of your site—even your best content.
Why Crawlability Matters:
- Search Visibility: If a page isn’t crawlable, it won’t appear in Google or any search engine’s results.
- Organic Traffic: Crawlability is the first step toward attracting organic traffic. No crawl = no index = no traffic.
- SEO Performance: Even if your page is perfectly optimized with keywords, meta tags, and engaging content, none of that matters if bots can’t access it.
What Affects Crawlability:
- Technical errors (e.g., 404, 503, slow load times)
- Improper robots.txt or meta tag settings
- Lack of internal links
- Overuse of JavaScript for loading content
- Complex site structures or deeply buried pages
In short, crawlability ensures your content has a chance to rank. Without it, your SEO efforts are invisible to search engines.
✅ How Search Engines Crawl Your Site
Search engines like Google use automated software called crawlers (or bots/spiders) to browse and scan the web continuously. Their job is to:
- Discover new pages,
- Understand existing pages,
- Index useful content for future search queries.
This process begins with a list of known URLs and sitemaps. From there, crawlers follow links (both internal and external) to explore new content. This discovery process is what we refer to as “crawling.”
Step-by-Step Breakdown of the Crawling Process
- Start with Known URLs: Search engines maintain a list of previously discovered URLs. They begin crawling from these "seeds," often starting with your homepage and sitemap entries.
- Follow Internal & External Links: Bots follow hyperlinks from one page to another. The more links pointing to a page, the easier it is to find. This is why internal linking and backlinks are so important.
- Scan Page Content: Once on a page, the crawler reads the HTML code, extracts visible text, and notes key elements like titles, headings, image alt text, and links.
- Evaluate Crawl Priority: Search engines don't crawl every page on every visit. They apply a crawl budget, an estimated number of pages they're willing to crawl based on your site's authority, structure, and server speed.
- Queue Pages for Indexing: After crawling, search engines decide whether a page should be indexed. If the page is valuable, unique, and allowed (i.e., not blocked by noindex tags or robots.txt), it gets added to the index.
- Re-Crawl Over Time: Crawlers return periodically to check for updates or changes. Popular or important pages (e.g., homepage, blog posts) are crawled more frequently.
Important Notes:
Slow Response = Reduced Crawling: If pages load slowly or show frequent errors (5xx codes), crawlers may reduce how often they visit.
Crawlability ≠ Indexing: Just because a bot crawls a page doesn’t mean it will be indexed. The page still needs to meet quality standards and indexing criteria.
Broken Links = Dead Ends: If internal links are broken, bots can’t continue crawling.
JavaScript Barriers: If critical content loads only after JavaScript executes and bots can’t render it, that content may not be seen at all.
✅ What Is Crawlability? Complete Guide + Related Terms Explained
Whether you’re an SEO professional, a parent exploring child development, or someone interested in construction and tech, you’ve likely come across terms like crawlability, crawler cranes, or crawling in babies. This guide answers all your crawl-related questions in one place, explaining what these terms mean in their unique contexts.
🌐 What Is Crawlability? (SEO Definition)
Crawlability refers to the ability of search engine bots—like Googlebot—to access, navigate, and read the content of a website. It’s a foundational concept in technical SEO.
💡 Crawlability definition: The measure of how easily search engines can crawl a site’s pages using internal links and navigation.
If your website isn’t crawlable, search engines can’t discover or index your content, which means it won’t appear in search results. Improving crawlability involves optimizing internal linking, fixing broken links, and managing crawl depth.
🧭 What Is Crawl Control?
Crawl control is the process of managing how and when search engine bots crawl your website. Tools like Google Search Console and Bing Webmaster Tools allow you to adjust crawl frequency and monitor bot activity.
Proper crawl control prevents:
- Overloading your server
- Wasting crawl budget on irrelevant pages
- Crawlers skipping high-priority content
📏 What Is Crawl Depth?
Crawl depth is the number of clicks it takes from the homepage to reach a specific page on your site.
- Shallow pages (1–3 clicks away) are crawled more frequently.
- Deep pages (4+ clicks) may be ignored or crawled less often.
Reducing crawl depth improves website crawlability and ensures critical pages are indexed.
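If you want a rough, do-it-yourself picture of click depth, the sketch below crawls breadth-first from the homepage and prints how many clicks away each discovered internal page is. It's a minimal example, assuming the requests and beautifulsoup4 packages are installed and that example.com stands in for your own domain.

```python
# Minimal crawl-depth sketch: breadth-first crawl from the homepage,
# recording how many clicks each internal page is from the start URL.
# Assumes `requests` and `beautifulsoup4` are installed; start_url is a placeholder.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

start_url = "https://example.com/"          # replace with your homepage
domain = urlparse(start_url).netloc

depths = {start_url: 0}
queue = deque([start_url])

while queue:
    url = queue.popleft()
    if depths[url] >= 3:                    # only expand pages within 3 clicks
        continue
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == domain and link not in depths:
            depths[link] = depths[url] + 1
            queue.append(link)

for page, depth in sorted(depths.items(), key=lambda item: item[1]):
    print(depth, page)
```

Dedicated crawlers like Screaming Frog report the same crawl-depth metric far more thoroughly; this is just a quick spot check.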
🏗️ What Is a Crawler Crane?
Switching gears to construction: a crawler crane is a large, heavy-duty crane that moves on crawler tracks. It’s used for lifting extremely heavy materials and is ideal for rough or soft terrain.
Unlike wheeled cranes, crawler cranes offer better stability and are used in mega infrastructure projects.
🪢 What Is a Crawler Harness?
In safety and climbing, a crawler harness is a type of harness worn to support the body during crawling movements, often used in cave exploration, military drills, or extreme sports.
⚙️ What Is Crawl Rated?
The term crawl rated often refers to off-road vehicle components, like tires or gearboxes, that are optimized for low-speed, high-torque crawling over rocks or uneven surfaces. It signifies durability and strength under harsh conditions.
📊 What Is Crawl Data?
Crawl data includes all the information collected by search engine crawlers, such as:
- Page titles
- Metadata
- Link structures
- HTTP status codes
You can view this data using tools like Google Search Console, Screaming Frog, or Ahrefs to fix crawl errors and improve indexing.
🐛 What Is Crab Crawling?
Crab crawling is a physical movement pattern—sideways motion typically done in physical therapy, fitness, or child motor development activities.
This is different from standard crawling, which moves forward or backward.
👶 What Is Crawling in Babies?
Crawling in babies is a developmental milestone where infants begin moving on hands and knees, usually between 6–10 months of age.
👶 What Is Crawling for Babies?
This refers to the overall process and benefits of baby crawling, which include:
- Strengthening muscles
- Improving motor coordination
- Enhancing spatial awareness
🚸 What Is Normal Crawling Age?
The normal crawling age ranges from 6 to 10 months, but it can vary depending on the child's development pace. Some babies skip crawling entirely and go straight to walking.
⚖️ What Is the Difference Between Delve and Dive?
This question often pops up in language usage:
- Delve implies digging deeply into a subject or issue.
- Dive suggests a sudden plunge, often physically or metaphorically (e.g., dive into a project).
🧗 What Is Crawl, Walk, Run?
Crawl-Walk-Run is a strategy model that emphasizes starting with basics (crawl), building competence (walk), and finally optimizing or scaling (run). It’s used in:
- Business transformation
- Project management
- Learning and development
🧱 What Is Crawl Foundation?
This term applies to construction. A crawl foundation (or crawl space foundation) elevates a house a few feet off the ground, allowing easy access to plumbing and wiring. It’s common in humid regions and offers ventilation.
🎬 What Is Crawl (2019) About?
Crawl (2019) is a survival horror film where a woman and her father are trapped in a flooded home during a hurricane, while being hunted by alligators. It’s known for its suspense and creature-feature thrills.
🕸️ What Is Website Crawlability?
Website crawlability is the overall ease with which bots can explore and understand the structure and content of a website. It depends on:
- Internal linking
- XML sitemaps
- Server response times
- Robots.txt and meta tags
⚙️ What Is a Crawl Ratio?
In automotive terms, a crawl ratio is the gear ratio that determines how slowly and powerfully a vehicle moves in low gear. A higher crawl ratio is ideal for rock crawling or steep off-road climbs.
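To make that concrete (with illustrative numbers, not specs from any particular vehicle): crawl ratio is typically calculated as first-gear ratio × transfer-case low-range ratio × axle ratio, so a 4.0:1 first gear, a 2.72:1 low range, and 3.73:1 axle gears give roughly 4.0 × 2.72 × 3.73 ≈ 40.6:1.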
📘 What Is LopesWrite?
LopesWrite is an academic writing tool used by students at Grand Canyon University to check for grammar, citations, and plagiarism. It’s similar to Turnitin and helps maintain academic integrity.
🧠 What Is Creeping vs Crawling?
In child development:
- Creeping is when babies move using their stomachs (army crawl).
- Crawling is when babies move using their hands and knees.
Both are normal, but crawling usually follows creeping as babies gain strength and coordination.
🌍 What Is Crawlbaarheid?
Crawlbaarheid is simply the Dutch word for "crawlability." It has the same SEO-related meaning, referring to how accessible a website is to search engine bots.
🧱 Bonus: Why Crawlability Matters for SEO
Whether you’re asking what website crawlability is or what crawl depth is, the answer leads back to this: If your content isn’t crawlable, it won’t be found.
Improving crawlability ensures:
- Higher indexation rates
- Better SEO visibility
- Improved rankings and organic traffic
✅ Final Thoughts on Crawlability
From crawler cranes to crawl data, and from baby crawling milestones to SEO crawlability, crawling shows up in many parts of life and business. Whether you’re working on a website, parenting a toddler, or gearing up your 4×4 for off-road crawling, understanding these terms empowers you to act smarter.
📌 Want to boost your website's crawlability?
Start by auditing your site. Related reading:
A Complete Guide to SEO Title Optimization
✅ What Helps Googlebot Crawl Your Site Effectively
To help Googlebot and other search engine crawlers explore your website efficiently, your site needs to be crawl-friendly, well-structured, and free of roadblocks. The smoother the experience for bots, the more likely your important content gets indexed and ranked.
Let’s dive deeper into the core elements that help crawlers do their job:
🔹 1. XML Sitemap
An XML sitemap is a file that lists your website’s most important pages. It helps search engines:
- Identify which URLs you want crawled and indexed.
- Discover newly published content faster.
- Prioritize pages based on update frequency and importance.
A good sitemap:
- Includes core pages like the homepage, services, contact, blogs, etc.
- Lists canonical versions of URLs (no duplicate or redirected pages)
- Excludes noindex or irrelevant pages
Best Practices:
- Use plugins like Yoast SEO, All in One SEO, or Rank Math to auto-generate your sitemap in WordPress.
- Submit your sitemap to Google Search Console and Bing Webmaster Tools.
- Keep your sitemap updated automatically as you add or remove content.
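For reference, a minimal XML sitemap looks something like this (the URLs and dates are placeholders, not real entries):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2024-05-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/services/</loc>
    <lastmod>2024-04-18</lastmod>
  </url>
</urlset>
```

Plugins generate and update this file for you; the point is simply that every URL listed should be a live, canonical page you want indexed.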
🔹 2. Internal Links
Internal links are hyperlinks that connect one page of your website to another. These links act as pathways for both users and crawlers, allowing bots to:
- Navigate your site structure
- Discover related content
- Distribute page authority (link equity)
Why They Matter:
- If a page isn’t linked internally, bots may never find it.
- The more internal links a page receives, the more important it appears to search engines.
Best Practices:
- Add contextual links inside content (not just in the menu).
- Link new pages from older, authoritative pages.
- Use keyword-rich anchor text that reflects the destination page.
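As a simple illustration, a contextual internal link inside body content might look like this (the URL and anchor text are made up for the example):

```html
<p>
  Before you publish, run a quick
  <a href="https://example.com/blog/technical-seo-audit/">technical SEO audit</a>
  so crawlers can reach the new page from an established one.
</p>
```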
Boost Your SEO with Smart Internal Linking – Maximize Rankings Now!
🔹 3. Clear Site Structure
A clear site structure means your website is logically organized, making it easy for crawlers (and users) to find pages quickly. Ideally, all key pages should be accessible within 2 to 3 clicks from the homepage.
Why It Matters:
- Google uses internal links and structure to understand page importance.
- Deeply nested pages (e.g., 5+ clicks from the homepage) may be ignored or crawled less frequently.
How to Structure Your Site:
- Use categories and subcategories for blogs or product listings.
- Avoid unnecessary directories or subfolders.
- Maintain consistent navigation with menus, breadcrumbs, and footers.
🔹 Bonus: Other Factors That Help Googlebot
- Fast-loading pages: Improves crawl efficiency.
- Mobile responsiveness: Googlebot uses a mobile-first approach.
- Clean URLs: Avoid URLs with long parameters and session IDs.
- No crawl errors: Ensure your pages return an HTTP 200 OK status.
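One easy way to confirm that last point is to spot-check status codes yourself. The sketch below uses Python's requests library (an assumption, not something your site requires) to print the HTTP status of a few key URLs; replace the placeholder list with your own pages.

```python
# Spot-check HTTP status codes for a few important URLs.
# Minimal sketch: requires the `requests` package; the URLs below are placeholders.
import requests

urls = [
    "https://example.com/",
    "https://example.com/blog/",
    "https://example.com/services/",
]

for url in urls:
    response = requests.get(url, allow_redirects=False, timeout=10)
    # 200 is what you want; 3xx means a redirect, 4xx/5xx means a crawl problem.
    print(response.status_code, url)
```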
✅ Common Crawlability Issues (and How to Fix Them)
Even well-built websites can face crawlability issues due to technical errors, misconfigurations, or poor site hygiene. Here are the most frequent crawl blockers and how to fix them.
❌ 1. Broken Internal Links
Broken internal links occur when a link leads to a page that no longer exists or returns a 404 error. This results in crawl dead ends, wasted crawl budget, and poor user experience.
Example:
Your blog links to /ebook, but that page was deleted or renamed.
Fix:
- Use tools like Screaming Frog, Ahrefs, or Semrush to find broken links.
- Redirect them (301) to the correct page or remove the link entirely.
- Regularly audit your site for broken links.
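For instance, on an Apache server the redirect for the /ebook example above could be a one-line rule in .htaccess (the destination path here is hypothetical; point it at whichever page replaced the old one):

```apache
# Permanently redirect the removed page to its replacement
Redirect 301 /ebook /resources/seo-ebook
```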
❌ 2. Orphan Pages
Orphaned pages have no internal links pointing to them. Search engines can’t find these pages unless they are in your sitemap or linked externally.
Example:
You created a landing page for a limited-time event, but didn’t link to it from your site navigation, homepage, or blog.
Fix:
- Identify orphaned pages using crawl tools or analytics.
- Link them from relevant articles, the menu, or the footer.
- Add them to your sitemap.
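If you want to hunt for orphans yourself, one rough approach is to compare your sitemap against the list of URLs a link-following crawl actually reached. The sketch below assumes you've exported that crawled list (one URL per line, e.g. from a Screaming Frog export) to a file called crawled_urls.txt, and the sitemap URL is a placeholder.

```python
# Rough orphan-page check: URLs in the sitemap that a link-following crawl
# never reached are candidates for orphan pages.
# Assumes `requests` is installed and crawled_urls.txt exists (one URL per line).
import xml.etree.ElementTree as ET

import requests

sitemap_url = "https://example.com/sitemap.xml"   # placeholder
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
sitemap_urls = {loc.text.strip() for loc in root.findall(".//sm:loc", ns)}

with open("crawled_urls.txt") as f:
    crawled_urls = {line.strip() for line in f if line.strip()}

for url in sorted(sitemap_urls - crawled_urls):
    print("Possible orphan:", url)
```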
❌ 3. Blocked by robots.txt
The robots.txt file is located at yourdomain.com/robots.txt and tells crawlers which areas of the site they can or cannot access. Mistakenly blocking important paths can completely hide them from search engines.
Example:
Your robots.txt contains:
```
Disallow: /blog/
```
This prevents Googlebot from crawling the entire blog section.
Fix:
- Review robots.txt regularly (especially after site redesigns).
- Remove or adjust disallow rules blocking key sections.
- Allow important folders like /blog/, /products/, or /services/.
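A healthy robots.txt usually blocks only genuinely private or low-value areas and points crawlers at your sitemap. Something like the example below (the paths are illustrative, typical of a WordPress setup):

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://example.com/sitemap.xml
```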
Additional Common Issues to Consider (covered in more detail below):
- Pages are buried too deeply in the hierarchy (fixed with better structure).
- Noindex or canonical misconfiguration (blocks indexing even if crawled).
- Slow loading pages (affects crawl frequency).
- Redirect chains and loops (confuse crawlers and users).
🔧 Pro Tip
Use this basic checklist to keep your pages crawlable:
| Task | Status |
|---|---|
| Page is linked internally | ✅/❌ |
| Included in XML sitemap | ✅/❌ |
| Not blocked by robots.txt | ✅/❌ |
| No broken internal links | ✅/❌ |
| No noindex/canonical mistakes | ✅/❌ |
| Loads fast and returns 200 OK | ✅/❌ |
❌ 4. Misused noindex or Canonical Tags
🔍 What It Means:
- A noindex tag is an HTML meta tag that tells search engines not to include a page in their search index, even if it's crawlable.
- A canonical tag (rel=canonical) informs search engines which version of a similar or duplicate page is the preferred one to index.
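Both tags live in the page's head. For reference (the canonical URL is a placeholder):

```html
<head>
  <!-- noindex: keep this page out of the search index -->
  <meta name="robots" content="noindex">

  <!-- canonical: tell search engines which version of a duplicate page to index -->
  <link rel="canonical" href="https://example.com/products/blue-widget/">
</head>
```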
💥 The Problem:
These tags are essential for managing duplicate content and excluding low-value pages from search. But if used incorrectly, they can unintentionally block important pages from showing up in search results.
🚫 Examples of Misuse:
- Leaving a noindex tag on a product page that you want to rank.
- Adding a canonical tag that points to the wrong URL or an irrelevant page.
- Applying a global noindex rule through a plugin or CMS without checking individual pages.
- Using canonical tags for pagination (e.g., all paginated blog pages are canonicalized to the first page), which causes the rest to be ignored.
🔧 Fix:
- Perform a technical SEO audit using tools like Screaming Frog, Sitebulb, or Semrush to scan for noindex tags and canonical mismatches.
- Check for any system-wide settings (e.g., Yoast or Rank Math configurations) applying noindex to content types like categories, tags, or product pages.
- For canonical tags:
  - Ensure they point to the correct version of the page (not a staging URL, homepage, or unrelated product).
  - Use canonicals to consolidate duplicate pages, not to hide content accidentally.
✅ Best Practices:
- Only apply noindex to pages like:
  - Thank-you pages
  - Login/dashboard pages
  - Internal tracking URLs
- Only apply canonical tags to pages that:
  - Have near-identical content
  - Use sorting/filtering query parameters
  - Are versions of the same product or article
❌ 5. Pages Buried Too Deep in Site Architecture
🔍 What It Means:
If a page requires more than 3–4 clicks from the homepage, search engines may view it as less important and crawl it less frequently—or miss it entirely.
This concept is called click depth. The deeper a page is buried, the less likely it is to be crawled or indexed efficiently.
💥 The Problem:
- Googlebot typically begins crawling from the homepage and navigates through internal links.
- If key pages like blog posts, landing pages, or products are not linked from menus, hubs, or major pages, they risk becoming hidden.
- Deep content leads to poor SEO performance and poor user experience.
🧭 Example:
You publish an ultimate guide, but it’s:
- Not linked in your main menu
- Only linked from one old blog post
- Takes 5+ clicks to reach from the homepage
Even though the content is valuable, Googlebot may treat it as low-priority due to its buried position.
🔧 Fix:
- Flatten your site structure by reducing the number of clicks needed to reach core pages.
- Use category pages, tags, or hub pages to organize and surface buried content.
- Add links to important pages in:
- Main navigation
- Footer menus
- Sidebar widgets
- Blog posts (contextual linking)
- Monitor click depth with tools like:
- Screaming Frog: Check the “Crawl Depth” column
- Google Search Console: Use internal linking reports
✅ Best Practices:
- Highlight evergreen or high-performing content in the top navigation or sidebar.
- Keep vital pages within 2–3 clicks of the homepage.
- Create internal link hubs that group similar content and make deep pages more accessible.
Technical Factors That Block Crawlers
Besides structural issues, technical problems can also block or slow down crawling:
1. Server Errors (5xx Codes)
If your server is down or overloaded, crawlers may receive error codes like 503 or 504. Frequent errors reduce crawl frequency.
Fix: Use reliable hosting and monitor uptime.
2. Slow Page Speed
Slow-loading pages use up crawl budget and can be skipped by bots.
Fix: Optimize images, remove unnecessary scripts, and test your speed with tools like PageSpeed Insights.
3. JavaScript Rendering Issues
If content loads only after JavaScript execution, crawlers may not see it unless they can render the script.
Fix: Use server-side rendering or preload important content in the HTML.
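A quick way to sanity-check this is to fetch the raw HTML, before any JavaScript runs, and look for a phrase you expect on the rendered page. Below is a minimal sketch using Python's requests library; the URL and phrase are placeholders.

```python
# Does key content exist in the raw HTML, or only after JavaScript runs?
# Minimal sketch: requires `requests`; URL and phrase are placeholders.
import requests

url = "https://example.com/pricing/"
expected_phrase = "Compare our plans"

raw_html = requests.get(url, timeout=10).text
if expected_phrase in raw_html:
    print("Found in raw HTML: crawlers can see it without rendering JavaScript.")
else:
    print("Missing from raw HTML: it may only appear after JavaScript runs.")
```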
4. Redirect Chains or Loops
Multiple or circular redirects confuse crawlers and may block access to pages.
Fix: Keep redirects direct and minimal. Audit your redirects using tools like Screaming Frog.
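To see a chain for yourself, the sketch below follows a URL's redirects with Python's requests library and prints every hop (the URL is a placeholder).

```python
# Trace a redirect chain: requests keeps each intermediate response in
# response.history, so several entries mean a multi-hop chain.
import requests

response = requests.get("https://example.com/old-page/", timeout=10)

for hop in response.history:
    print(hop.status_code, hop.url)
print(response.status_code, response.url)

if len(response.history) > 1:
    print(f"{len(response.history)} hops: update links to point at the final URL.")
```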
How to Test and Monitor Crawlability
Use these tools to check how search engines interact with your site:
1. Google Search Console (GSC)
Check which pages are indexed and find errors that may prevent crawling.
2. URL Inspection Tool (GSC)
Test a specific page to see if it’s indexed, blocked, or encountering issues.
3. Server Log Analysis
Shows which pages Googlebot has visited. Helpful for identifying patterns and missed content.
4. Semrush Site Audit
Highlights crawlability issues such as blocked pages, broken links, and missing metadata.
5. Screaming Frog & Similar Tools
Simulate a site crawl and list problems like orphaned pages or redirect loops.
Optimizing Crawl Paths and Internal Linking
Improving how pages connect helps both crawlers and users:
1. Use a Flat Site Structure
Keep pages accessible within a few clicks of the homepage.
2. Add Contextual Links Inside Content
Internal links within paragraphs help bots understand topic relationships.
3. Link to High-Value Pages Often
Link important pages frequently so they’re crawled and indexed more often.
4. Avoid Linking to Low-Priority Pages
Too many links to thin or outdated pages can waste crawl budget.
Crawlability vs. Indexability: Understanding the Difference
- Crawlability = Can bots access the page?
- Indexability = Can the page be stored in search engine databases and shown in search results?
A page may be crawlable but not indexed if:
- It has a noindex tag
- A canonical tag points to another page
- The content is thin or duplicated
- It’s blocked by meta tags or headers
How to Troubleshoot Crawlability and Indexability
To fix crawl issues:
- Use GSC’s URL Inspection Tool to check if the page is crawled.
- Confirm it's not blocked by robots.txt.
- Ensure the page returns a valid status code (200).
- Remove any noindex tags if indexing is intended.
- Check if the canonical tags point correctly.
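Several of these checks can be automated for a single page. The sketch below is a minimal example using requests and beautifulsoup4 (both assumptions, and the URL is a placeholder): it prints the status code, any robots meta directive, and the canonical target.

```python
# One-page crawlability spot check: status code, robots meta tag, canonical tag.
# Minimal sketch: requires `requests` and `beautifulsoup4`; URL is a placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/blog/new-post/"
response = requests.get(url, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

robots_meta = soup.find("meta", attrs={"name": "robots"})
canonical = soup.find("link", attrs={"rel": "canonical"})

print("Status code:", response.status_code)  # you want 200
print("Robots meta:", robots_meta.get("content") if robots_meta else "none")
print("Canonical:  ", canonical.get("href") if canonical else "none")
```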
Make Crawlability the First Part of Your Publishing Checklist
Before publishing a new page, ask:
- Is it linked from another page?
- Is it included in your sitemap?
- Does it load quickly and return a 200 status code?
- Is it free of noindex or incorrect canonical tags?
Checking crawlability early helps avoid delays in indexing and ranking.
🔎 Crawlability FAQ: Everything You Need to Know
1. What is crawlability in SEO?
Answer:
Crawlability refers to the ability of search engine bots (like Googlebot) to access and navigate the pages on your website. If your site is crawlable, search engines can discover and scan your content, which is the first step toward indexing and ranking.
2. How does crawlability differ from indexability?
Answer:
- Crawlability is about access—whether bots can reach a page.
- Indexability is about inclusion—whether the page is allowed to be stored in the search engine’s index and appear in search results.
A page must be crawlable first to even be considered for indexing.
3. Why is crawlability important for SEO?
Answer:
If a page is not crawlable:
- It won’t appear in search engine results.
- Your website’s organic visibility is reduced.
- Valuable content may go undiscovered by search engines.
Improving crawlability helps search engines find, assess, and rank your content efficiently.
4. What is a crawl budget?
Answer:
The crawl budget is the number of pages a search engine will crawl on your site during a given period. It’s based on:
- The size of your site
- The server response speed
- The importance of your pages
Wasting crawl budget on low-value or broken pages can cause key pages to be missed.
5. What are the most common crawlability issues?
Answer:
- Broken internal links
- Orphan pages (no internal links pointing to them)
- robots.txt blocking important sections
- Misused noindex or canonical tags
- Excessive click depth (pages buried too deep)
- Server errors or downtime
- JavaScript rendering problems
- Redirect chains or loops
6. How do I know if a page is crawlable?
Answer:
Use these tools to test crawlability:
- Google Search Console (GSC) – See if the page is indexed.
- GSC URL Inspection Tool – Check crawl and indexing status.
- Screaming Frog – Simulates how search engines crawl your site.
- Server logs – Show actual bot activity and missed pages.
7. What is an orphan page? Why is it a problem?
Answer:
An orphan page is a page with no internal links pointing to it. Search engines may never discover it unless it’s included in the sitemap. If important pages are orphaned, they can be excluded from indexing and search results.
8. What does “pages buried too deep” mean?
Answer:
Pages that are more than 3-4 clicks away from the homepage are considered deeply buried. These pages are less likely to be crawled frequently. Search engines give preference to pages that are easier to access.
9. How does the robots.txt file affect crawlability?
Answer:
The robots.txt file tells search engines which parts of your site they can or cannot crawl. Incorrect settings can unintentionally block important content (like blogs or product pages), making it invisible to bots.
10. What is the difference between noindex and disallow?
Answer:
- noindex: Allows crawling but tells search engines not to include the page in search results.
- disallow (in robots.txt): Tells search engines not to crawl the page or directory at all.
Both can reduce visibility if used on important pages unintentionally.
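Side by side, the two look like this (the /private/ path is just an example):

```
# robots.txt: bots are told not to crawl these URLs at all
User-agent: *
Disallow: /private/
```

```html
<!-- meta noindex: bots may crawl the page, but won't show it in results -->
<meta name="robots" content="noindex">
```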
11. What are canonical tags, and how can they affect crawlability?
Answer:
A canonical tag tells search engines which version of a similar or duplicate page to treat as the main one. If misused, it can confuse crawlers and cause them to ignore a page that you want indexed.
12. How does internal linking help with crawlability?
Answer:
Internal links guide bots from one page to another. If a page is not linked internally, bots may not find it. Proper internal linking ensures all content is accessible and can be crawled efficiently.
13. How can slow page speed hurt crawlability?
Answer:
Slow-loading pages:
- Waste crawl budget
- May be skipped or crawled less often
- Create a poor user experience
Optimize images, reduce scripts, and test your site using PageSpeed Insights to improve crawl performance.
14. What role does JavaScript play in crawlability?
Answer:
Some websites use JavaScript to load content. If the content appears only after JavaScript runs, and search engine bots can’t render it, then it might not be crawled or indexed.
Use server-side rendering or ensure important content is visible in the raw HTML.
15. What are redirect chains and loops, and why are they bad?
Answer:
- A redirect chain happens when one URL redirects to another, and then another, etc.
- A loop is when URLs redirect in a circle (A → B → C → A).
Both confuse bots and can block access to content.
16. How often should I check my site for crawlability issues?
Answer:
It’s best to perform:
- Monthly site audits for small to medium sites
- Weekly or daily checks for large or frequently updated websites
Use tools like Semrush, Ahrefs, or Screaming Frog for regular scans.
17. Should I include every page in my XML sitemap?
Answer:
No. Only include:
- Pages you want indexed
- High-quality, original content
- Canonical URLs
Avoid listing pages with noindex tags or duplicate versions.
18. What checklist should I follow before publishing a new page?
Answer:
- ✅ Linked internally
- ✅ Added to sitemap
- ✅ Loads fast with a 200 status code
- ✅ No noindex tag
- ✅ Correct canonical tag
This ensures the page is crawlable and indexable from day one.
19. Can crawlability issues affect my ranking?
Answer:
Yes. If search engines can’t find or access your content:
- It won’t be indexed
- It can’t rank
- You lose traffic opportunities
Fixing crawlability is foundational to SEO success.
20. Where can I monitor crawl statistics?
Answer:
Use Google Search Console > Settings > Crawl Stats to view:
- Total crawl requests
- Average response time
- Crawl distribution by response code
This helps track how often and how deeply Google crawls your site.