This is a screenshot of the original front page (running the Minimal Grid theme), which looks a little clumsy.
To test the loading time of the current site I installed a wp-cli extension called Code Profiler:
wp code-profiler run --url=https://ilminster.net/
➤ 15 plugins and 1 theme
➤ Execution time: 0.4013s
➤ Peak memory: 64.43 MB
➤ File I/O operations: 7,343
➤ SQL queries: 51
➤ Accuracy: Highest
After testing out a few WordPress themes, I settled on Go, which is fast, free and powerful, with a simple design. There is also an active Go support page.
The features I want over the old theme include:
So I set about updating my localhost version to see how it works.
To install the Go theme with wp-cli I entered
wp theme install go
and activated it in the themes page.
It required a few changes to some pages, as Go also uses the new Gutenberg block editor, which I had been resisting switching to, so I disabled the Classic Editor plugin.
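Both steps can be done from the command line; the --activate flag installs and activates the theme in one go, and classic-editor is the plugin's slug:

```shell
wp theme install go --activate     # install and activate in one step
wp plugin deactivate classic-editor
```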
I like the look of it and hope you do too.
It was indexed within an hour on both DuckDuckGo and Bing, but still not indexed directly on Google.
I have written a tweet about it, but wanted to try out the different AIs to work out why it isn't featured on Google.
So firstly I asked Google Bard, and it couldn't see the page.
After a good search, I discovered I cannot submit a GitHub repo directly and will have to wait for Google to index it in time.
I have made an 'Awesome perplexity.ai' page on GitHub, which is a list of awesome links about perplexity.ai and the world it is creating.
I have changed the default search engine in my browser to perplexity.ai
and made a Twitter list with links to Perplexity staff on Twitter,
and now I want to know how long it will take until perplexity.ai knows about the 'Awesome perplexity.ai' page on GitHub.
Here is a link to a search looking for the Awesome perplexity.ai page, but I am not sure if it will update in real time.
Very cool.
Update 18:14: Bing has found it.
Update 18:23: DuckDuckGo has found it.
It is good that it has been indexed so quickly and is the top result for this search, but the fix seems to be adding the trailing slash to the URL.
The last GitHub issue was about trailing slashes, so my fix is to hardcode the trailing slash in the blog list and the pagination, which I think is the problem.
I will monitor the situation with these links and see what happens as this is indexed over the next 24 hours.
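The hardcoding itself depends on the site generator, but as a sanity check a small filter can normalise the generated links; the href pattern below is an assumption about my own link structure, so adjust it to match yours:

```shell
# Append a trailing slash to internal blog links that lack one.
fix_slashes() {
  sed -E 's#href="(/blog/[^"./]+)"#href="\1/"#g'
}
printf '<a href="/blog/post-1">Post 1</a>\n' | fix_slashes
# prints: <a href="/blog/post-1/">Post 1</a>
```

Links that already end in a slash are left untouched, so the filter is safe to run repeatedly.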
This blog is an attempt to match what I can offer to those who want some sort of service that I provide.
I have written about the things I can offer on this blog, which is currently linked from each of these pages in the icon bar at the top of the page.
What this blog is about is exploring how people search for those who use these technologies, and trying to get them to find me. A fair number of these skillsets overlap, but I have tried to focus on logical units.
Infrastructure as Code Cloud automation Ansible freelancer Ansible consultant DevOps automation freelancer Infrastructure automation freelancer Configuration management freelancer Ansible playbooks Ansible roles Ansible modules Ansible Galaxy Ansible Tower Terraform providers Terraform modules Terraform state Terraform Cloud Terraform freelancer UK Terraform consultant Terraform infrastructure as code expert DevOps with Terraform Cloud automation with Terraform AWS Terraform freelancer Azure Terraform consultant GCP Terraform specialist Multi-cloud Terraform expertise Terraform security best practices
AWS Linux freelance AWS Linux consulting Linux on AWS AWS for startups Cloud migration Linux AWS DevOps AWS automation Security hardening AWS Linux AWS CloudFormation consulting Ansible automation Linux on AWS Terraform configuration for AWS Linux Jenkins configuration for AWS Cloudwatch monitoring for AWS Linux AWS Lambda development Cloudtrail logging configuration AWS security groups and IAM roles Troubleshooting AWS Linux issues AWS Linux for healthcare AWS Linux for SaaS
Linux Bash shell scripting Bash shell scripting freelancer Bash scripting expert Automation scripting Linux Shell scripting services Hire Bash scripter Linux script writer Shell scripting for DevOps System administration scripting Bash scripting for automation Cloud automation scripting Network automation scripting Security scripting Linux Big data scripting Bash Scripting for web development
DevOps freelancer DevOps Engineer Automation Specialist IT automation Infrastructure as code AWS Devops Azure Devops GCP Devops
Laravel developer freelance PHP freelance developer Web development freelance Laravel Hire Laravel freelancer Expert Laravel programmer freelance PHP/Laravel consultant
Tailwind CSS developer Tailwind CSS designer Front-end developer UI/UX designer Responsive web design Pixel-perfect development Custom website design Landing page design Web app development Shopify website development WordPress website development CSS framework Utility-first CSS Rapid prototyping Performance optimization Pixel perfect design Mobile-first design
WordPress freelancer WordPress developer WordPress designer Hire a WordPress expert WordPress website development WordPress website maintenance WordPress e-commerce development WordPress theme customization WordPress plugin development WordPress security WordPress SEO optimization WordPress speed optimization WordPress website migration WordPress content creation WordPress backup and recovery WordPress troubleshooting
What I am actually interested in is what people ACTUALLY type into search engines when looking for roles in these areas.
I have used Docker quite a bit in the past; it lets you create a Dockerfile, a set of instructions to build an image that should work in any Docker environment. I come from a PHP/Laravel background, so I think my first project will be to get a Kubernetes install serving a Laravel app on AWS with a single node.
This is the corresponding GitHub repo, and this is the Dockerfile I will use as I follow this video series.
To get up and running with Kubernetes, I decided to try minikube. I added it to the Ansible playbook, but the first run of the minikube start command complained that there was no driver.
Then I used minikube start --driver=docker
which resulted in a permission error like so:
Exiting due to PROVIDER_DOCKER_NEWGRP: "docker version --format -:" exit status 1: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/version": dial unix /var/run/docker.sock: connect: permission denied
Suggestion: Add your user to the 'docker' group: 'sudo usermod -aG docker $USER && newgrp docker'
So I added the user to the group, but this didn't fix the issue (the new group membership only takes effect after logging back in); what did fix it was
chmod 666 /var/run/docker.sock
Now I can set the default driver with minikube config set driver docker
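A more persistent fix than the chmod workaround (which opens the socket to every local user) is the group change followed by a fresh session; the whole sequence might be sketched as:

```shell
sudo usermod -aG docker "$USER"    # add yourself to the docker group
# newgrp starts a subshell with the new group; logging out and back in also works
newgrp docker
minikube config set driver docker  # make docker the default driver
minikube start
```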
So now starting minikube produced:
😄 minikube v1.30.1 on Debian kali-rolling
✨ Using the docker driver based on user configuration
📌 Using Docker driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.26.3 preload ...
> preloaded-images-k8s-v18-v1...: 397.02 MiB / 397.02 MiB 100.00% 4.14 Mi
> gcr.io/k8s-minikube/kicbase...: 373.53 MiB / 373.53 MiB 100.00% 3.09 Mi
🔥 Creating docker container (CPUs=2, Memory=2800MB) ...
🐳 Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
❗ Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 18.517914147s
💡 Restarting the docker service may improve performance.
🌟 Enabled addons: storage-provisioner, default-storageclass
❗ /usr/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.26.3.
▪ Want kubectl v1.26.3? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
But minikube dashboard didn't fancy it:
❗ Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 2.222619049s
💡 Restarting the docker service may improve performance.
🔌 Enabling dashboard ...
▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
💡 Some dashboard features require the metrics-server addon. To enable all features please run:
minikube addons enable metrics-server
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
panic: send on closed channel
goroutine 107 [running]:
k8s.io/minikube/cmd/minikube/cmd.readByteWithTimeout.func2()
/app/cmd/minikube/cmd/dashboard.go:192 +0x67
created by k8s.io/minikube/cmd/minikube/cmd.readByteWithTimeout
/app/cmd/minikube/cmd/dashboard.go:187 +0x158
I think there was an issue with RAM, as I had so many browser windows open searching for answers; shutting most of them down allowed me to start the dashboard, and I saw this.
Which is a great start. I am now going to work out how to deploy a Dockerfile app! Wish me luck.
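As a sketch of that next step (the nginx image and the name hello are just placeholders), a first deployment on minikube might look like:

```shell
kubectl create deployment hello --image=nginx   # run the container in a Deployment
kubectl expose deployment hello --type=NodePort --port=80
minikube service hello                          # open the service URL in a browser
```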
Kubernetes, a container orchestration platform, has revolutionized the way applications are deployed, managed, and scaled. Its popularity has soared in recent years, making it an essential skill for anyone seeking to advance their career in DevOps or cloud computing.
Why Choose Christmas for Kubernetes Learning?
The Christmas break offers a unique opportunity to delve into the intricacies of Kubernetes without the distractions of everyday commitments. With ample time on your hands, you can dedicate yourself to learning at your own pace and focus on mastering the core concepts without the pressure of deadlines or meetings.
Benefits of Learning Kubernetes This Winter
Enhance Your Tech Skills: Kubernetes is a complex technology, but mastering it can significantly elevate your skillset. Learning Kubernetes will make you a more valuable asset to any organization, opening up new career opportunities and earning potential.
Stay Ahead of the Curve: Kubernetes is the de facto standard for container orchestration, and its adoption is only going to grow in the years to come. By learning Kubernetes now, you'll be well-positioned to ride this wave of technological innovation.
Personal Accomplishment: Mastering a challenging technology like Kubernetes is a significant personal achievement. It will boost your confidence and resilience, preparing you for future learning and professional challenges.
How to Embark on Your Kubernetes Learning Journey
Start with the Basics: Begin by understanding the fundamentals of containers and Docker, the foundation upon which Kubernetes is built.
Explore Online Courses and Tutorials: There are numerous high-quality online resources available to guide you through the learning process. Choose courses that align with your learning style and experience level.
Practice and Experiment: Hands-on practice is essential for solidifying your understanding of Kubernetes. Set up a personal Kubernetes cluster and experiment with deploying, managing, and scaling applications.
Join the Kubernetes Community: Engage with the active Kubernetes community through forums, online groups, and social media. Seek help, share knowledge, and contribute to the open-source project.
Learning Kubernetes during the Christmas break is an excellent way to invest in your future and prepare for the ever-evolving landscape of software development. With dedication and perseverance, you can master this powerful technology and reap the rewards of a rewarding career in cloud computing and DevOps. So, grab your laptop, embrace the spirit of learning, and embark on your Kubernetes journey today!
One of the best features of the Debian Linux environment is apt-get, apt or aptitude. You can search for and install software with ease, you can update the system when you want, and it is reliable. With this in mind, I have discovered Chocolatey, the package manager for Windows, and it is great.
So far I have installed FileZilla, Firefox, QuiteRSS, Sublime Text and VS Code, using this sort of command, which I will also use to install some more:
choco install chocolateygui
choco install libreoffice-fresh
choco install opera
choco install zeal
There is a Chocolatey search engine where you can search for software, or you can search with
choco search wordpress
which provides a similar result to 'apt-cache search wordpress'.
I tried to install 'googlechrome', but there was an error with the checksum so that didn't work; I have the Edge, Firefox, Brave and Opera browsers to use too, so that's no biggie!
I have used Windows a lot in the past, and after installing Git I managed to use Git Bash, which is more like a normal Linux terminal.
It would be nice to have a WAMP-type PHP and MySQL server to play with too. I wonder what is installed with just Composer; will it pull in dependencies?
choco install composer
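Composer on its own won't give me a full WAMP-style stack, but Chocolatey appears to have packages for the missing pieces; the package names below are assumptions based on the community repository, so check them with choco search first:

```shell
choco install php       # PHP runtime (assumed package name)
choco install mysql     # MySQL server (assumed package name)
choco install composer
```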
There are also Windows-only tools, so I can also install this SEO tool:
choco install screamingfrog
and also try this
choco install photogimp
This all makes it a breeze to install software in a predictable manner.
./racecards.py today
So my weekend's coding project was a PHP parser to analyse this JSON and suggest the horses most likely to win.
I installed two packages with Composer:
composer require cerbero/json-parser
composer require nunomaduro/termwind
The JSON parser simplifies the parsing, and Termwind colorizes the output.
<?php
require __DIR__ . '/vendor/autoload.php';
use Cerbero\JsonParser\JsonParser;
use function Termwind\{render};
$source = '../racecards/2023-12-04.json';
$json = JsonParser::parse($source);
// The top level is keyed by country; only GB races are of interest here.
foreach ($json as $country => $courses) {
if ($country == "GB"){
// Each course holds a list of races keyed by start time.
foreach ($courses as $course => $races) {
echo "-----------------------" . PHP_EOL;
render('<div class="px-1 text-red">' . $course . '</div>');
echo "-----------------------" . PHP_EOL;
foreach ($races as $time => $race) {
render("<p class='text-blue'>". $time ." " . $race['race_name'] ." ". "<span class='text-green'>". $race['prize']. "</span></p>");
foreach ($race as $field => $runners) {
if ($field == "runners"){
foreach ( $runners as $horse) {
// Count placed finishes (1st, 2nd or 3rd) in the recent form string.
$firstOrSecondOrThird = substr_count($horse['form'], '1') + substr_count($horse['form'], '2') + substr_count($horse['form'], '3') ;
// 'P' means pulled up; missing ratings also count against the horse.
$BadForm = substr_count($horse['form'], 'P');
$negs = - $BadForm;
if (!$horse['rpr']) {$negs--;}
if (!$horse['ts']) {$negs--;}
if (!$horse['trainer_rtf']) {$negs--;}
$score = $horse['ofr'] + $horse['rpr'] + $horse['ts'];
// Grey out horses with any negatives or no recent placed finish.
$darkout = "";
if ($negs < 0){$darkout = "text-gray-700";}
if ($firstOrSecondOrThird < 1){$darkout = "text-gray-700";}
render(" <p class='p-0 pl-1 m-0 ".$darkout ." '> ".
str_pad($horse['number'], 2, ".", STR_PAD_LEFT) . " " .
str_pad($horse['form'], 6, ".", STR_PAD_LEFT) . " " .
'<span class="text-yellow">' . str_pad( $horse['name'], 20, ".", STR_PAD_RIGHT ) . '</span> ' . " " .
str_pad($horse['age'], 2, ".", STR_PAD_LEFT) . " " .
str_pad($horse['lbs'], 3, ".", STR_PAD_LEFT) . " " .
str_pad($horse['ofr'], 3, ".", STR_PAD_LEFT) . " " .
str_pad($horse['rpr'], 3, ".", STR_PAD_LEFT) . " " .
str_pad($horse['ts'], 3, ".", STR_PAD_LEFT) . " " .
str_pad($horse['last_run'], 10, ".", STR_PAD_LEFT) . " " .
str_pad($horse['jockey'], 22, ".", STR_PAD_RIGHT) . " " .
str_pad($horse['trainer'], 34, ".", STR_PAD_RIGHT) . " " .
str_pad($horse['trainer_location'], 28, ".", STR_PAD_RIGHT) . " " .
str_pad($horse['trainer_rtf'], 3, ".", STR_PAD_LEFT) . " " .
$firstOrSecondOrThird . " " .
$score . " " .
$negs .
"</p>". PHP_EOL);
}
}
}
}
}
}
}
This produced a pretty output with a basic value to predict the best horses, judging by their form.
The last three columns are a positive factor, the total ratings and a negative factor. The greyed-out horses are those that failed the basic tests I have coded, which are rather crude, but a start.
The next version will load the horses for each race into an array and output them in order of score; it will also fade out non-runners and calculate the score in a function. I would also like to output a header for the columns.
I couldn't get the str_pad function to work with spaces, so I used dots, which is OK.
I am interested in how people weight the factors; at the moment everything has a weight of 1, with a missing value counting as a negative. I am going to test it out over the next few days and see how it works.
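The weighting idea is simple enough to sketch outside PHP too; this hypothetical shell version scores a form string as placed finishes minus pulled-up runs, mirroring the positive and negative factors above:

```shell
# Score a form string: +1 per 1st/2nd/3rd finish, -1 per 'P' (pulled up).
score_form() {
  placed=$(printf '%s' "$1" | tr -cd '123' | wc -c)
  pulled=$(printf '%s' "$1" | tr -cd 'P' | wc -c)
  echo $((placed - pulled))
}
score_form "1P3421"   # prints 3
```

A horse with form "1P3421" has four placed finishes and one pulled-up run, so it scores 3.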
I can assist you in running your website in several ways. I can help you create content for your website, such as blog posts, articles, and marketing materials. I can also help you optimize your website for search engines so that it appears higher in search results. Additionally, I can help you monitor your website's traffic and performance so that you can make data-driven decisions about how to improve your website.
I have used Macs, Windows and Linux for over 25 years and have a vast range of knowledge which can be useful to any firm. I can fix most problems and get you up and running. Visit my IT Support page on ilminster.net.
This is a configuration tool which lets you run a command to set up one or more computers in a logical and systematic way. I really like it and have used it to set up my main computer. Here is a link to my Ansible playbook on GitHub.
Bash shell scripting is a powerful tool for DevOps engineers and system administrators to automate tasks, manage infrastructure, and perform backups. Bash scripts are simple text files that contain a series of commands that are executed by the Bash shell. They can be used to perform a wide variety of tasks, such as installing software, configuring servers, and backing up data.
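As a tiny illustration of the kind of backup task Bash handles well, here is a hedged sketch of a dated-archive function (the names are mine, not from any real playbook):

```shell
# Archive a directory into a dated tarball and print the archive name.
backup_dir() {
  local src="$1"
  local dest="backup-$(date +%F).tar.gz"
  tar -czf "$dest" "$src" && echo "$dest"
}
```

Called as backup_dir /var/www, it would produce something like backup-2024-01-05.tar.gz; a cron entry can then run it nightly.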
My main language is PHP and I have used Laravel and WordPress for many years. Get in touch if you need help with your website.
I am comfortable with PHP and Laravel, CSS, Bootstrap, Tailwind CSS and understand web technologies.
I moved to Debian Linux as my main computer many years ago and have a fair amount of knowledge of running Linux as a desktop. As part of my focus on DevOps, I also have experience with CentOS, Ubuntu and other Linux systems (alongside Windows and Apple Mac).
If you need any of these services, I am available for work at the moment. Get in touch...
Starting with a new hard drive, I installed the latest Debian 12 OS from a USB stick. It is far quicker than years ago, and I had an OS installed in less than an hour.
I also used an Ansible playbook I made in the past, which installed browsers, text editors, games and more. This makes the process much easier, but I had to update it a little to add a few more apps and remove a couple that have disappeared from the Debian repo. Another example of change is the rapid rate of new PHP versions: I had been using PHP 8.1, but now PHP 8.3 is available in the Debian repo.
So after about an hour, Ansible had installed all the software, including a LAMP stack with PHP versions 8.0, 8.1 and 8.2, Apache and a MariaDB/MySQL database.
I keep regular backups of the /www directory with the four main projects I work on, and backups of the databases are easy to reimport.
But one problem I found was that the computer didn't sleep/suspend properly. I noticed the kernel image was only 6.1, so I considered adding a Kali Linux repo to the apt sources, which should give me a 6.5 kernel to fix the suspend problem. In the end it was solved by changing the default display manager from lightdm to gdm3, which had been set during the install, and that worked as I wanted.
Edit '/etc/X11/default-display-manager' and change it to
/usr/sbin/gdm3
In the setup process I had to choose between the display managers lightdm and gdm3; changing it to gdm3 fixed my shutdown, brightness and boot issues.
I also needed to reset the MariaDB root password:
ALTER USER 'root'@'localhost' IDENTIFIED BY 'newPassword';
flush privileges;
exit;
I have over 25 years of web experience and recently moved to Somerset for a simpler life. I am looking for a project (possibly WordPress based) to get my teeth into. I have a vast range of experience covering graphics, design, web development, copywriting, web servers, cloud computing and more.
The reason I like WordPress is that I started with a website for the local festival; it was running well but just needed tidying up. I found a new theme and improved all the content, and the festival went well.
Then I registered a domain to try and sell my services, developed a theme from scratch and started work on a plugin. This is going live this week and is a mix of pages for the services, with blog posts about the jobs I have done as I do them. I like this combination because I can write more posts without the pages being changed.
But this leads me to a number of questions:
What plugin features would you like to see?
I have developed a basic plugin using a few hooks and subpages, but would like to hear about what I should build to impress you.
What theme specs are you looking for?
I have worked with both Tailwind CSS and Bootstrap alongside jQuery, but again, what would you like to see?
If you have any WordPress roles or projects, let's discuss them. I tend to use CV-Library for job searches, so my CV is on there.
# update the wp-cli tool itself
wp cli update
# update wordpress core
wp core update
# update all the plugins
wp plugin update --all
# update all the themes
wp theme update --all
# export the database to a file
wp db export /home/andy/wp.sql
# install plugin
wp plugin install wp-statistics
# activate plugin
wp plugin activate wp-statistics
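The update steps above can be chained into a single maintenance one-liner (the backup filename here is just an example):

```shell
wp cli update --yes && wp core update \
  && wp plugin update --all && wp theme update --all \
  && wp db export "wp-backup-$(date +%F).sql"
```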
You can see extensive documentation here.
It is also possible to extend wp-cli, and I have added these two tools as well, which provide background on potential problems.
https://github.com/javiercasares/wpvulnerability
All round, it is a great tool to manage a WordPress site from the command line.
WordPress, with its user-friendly interface and endless possibilities, has become the platform of choice for many website owners and developers. When it comes to creating a unique, tailor-made website, customizing a WordPress theme is the way to go. One of the best starting points for this journey is the Underscores theme template (also known as "_s"). In this guide, we'll explore how to leverage Underscores as a solid foundation for your WordPress project.
Underscores is a lightweight, minimalistic starter theme developed by Automattic, the company behind WordPress. It's designed to give you a clean slate for building your custom WordPress theme. Here are some of the key features that make Underscores an excellent choice:
Bare Minimum: Underscores provides just the essentials - the basic structure, template files, and a well-organized directory structure. This makes it easier to understand and build upon.
Mobile-First: The theme is designed with a mobile-first approach, ensuring that your site will look and perform well on all devices.
Developer-Friendly: Underscores is developer-focused, which means it's well-documented and designed to be extended and customized to your heart's content.
Accessibility-Ready: It places a strong emphasis on web accessibility, ensuring your site is usable for all visitors, regardless of disabilities.
Before you start customizing, ensure you have a development environment set up. This typically involves installing a local server (like XAMPP or MAMP) and setting up a fresh WordPress installation.
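If wp-cli is available, the Underscores starter can also be generated straight into a local install with the scaffold command; the slug and theme name below are placeholders:

```shell
wp scaffold _s my-theme --theme_name="My Theme" --activate
```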
Now comes the fun part: making your Underscores theme truly yours.
Familiarize yourself with the theme's file structure. Underscores keeps things tidy and logical. Key files to look at include style.css, functions.php, and various template files.
style.css
Customize the theme's styles by editing the style.css file. This is where you'll set colors, fonts, and other design elements to match your brand.
functions.php
The functions.php file is your gateway to adding functionality to your theme. You can enqueue styles and scripts, add custom widgets, and more. This is also where you can set up support for features like post thumbnails, custom headers, and custom backgrounds.
Underscores provides template files for various parts of your site, like header.php, footer.php, and more. Customize these templates to match your design and layout requirements.
You can extend your theme's functionality by adding custom features using hooks, filters, or by creating custom functions in your functions.php file.
If your project requires specific features that aren't related to your theme's core functionality, consider installing WordPress plugins. They can add contact forms, SEO enhancements, e-commerce capabilities, and more without cluttering your theme.
After making all your customizations, thoroughly test your website to ensure everything works as expected. Test on various browsers and devices to check for any issues. Once you're satisfied, you can deploy your custom theme to your live WordPress site.
Customizing a WordPress theme using Underscores is an excellent way to create a unique, professionally designed website that reflects your brand and style. It gives you full control over your site's look and functionality while keeping things lightweight and efficient.
Remember, practice makes perfect. Don't be afraid to experiment and learn as you go. The WordPress community is full of resources and helpful individuals who can assist you on your journey to creating the perfect WordPress website. So, get started with Underscores, and let your creativity flow as you craft a website that's truly one-of-a-kind.
A WordPress theme consists of a variety of files that work together to control the appearance and functionality of your website. Here's a list of common file types found in a typical WordPress theme and their purposes:
style.css: This is the main stylesheet file for your theme. It contains CSS code that defines the design and layout of your website.
index.php: The primary template file used to display the main content of your website. It usually contains a loop that fetches and displays blog posts or other content.
header.php: This file typically contains the header section of your website, including the site title, navigation menus, and any global elements that appear at the top of every page.
footer.php: Similar to the header.php file, this contains the footer section of your website, including copyright information and any global elements that appear at the bottom of every page.
single.php: This template file is used to display individual blog posts or custom post types.
page.php: Used to display individual pages on your WordPress site.
archive.php: It's used to display archive pages, such as category or tag archives.
search.php: Displays search results when visitors use the search functionality on your website.
category.php: Used to display category archive pages.
tag.php: Used to display tag archive pages.
author.php: Displays author archive pages, showing posts by a specific author.
comments.php: Contains the code for displaying comments on your website. It is often included in single.php and page.php.
functions.php: This is where you can define custom functions and include additional functionality for your theme, such as registering sidebars or adding custom scripts.
sidebar.php: Defines the content of your theme's sidebar, which may include widgets, advertisements, or other custom content.
image files (e.g., .jpg, .png, .svg): These files are used for images and graphics within your theme, such as logos, background images, or other visual elements.
JavaScript files (e.g., .js): These files are used to add interactivity and functionality to your theme, like sliders, navigation menus, or other dynamic features.
Template part files (e.g., content.php, post-formats.php): These files are used to break down the structure of your theme into smaller, reusable components that can be included in other template files.
Custom template files (e.g., custom-template.php): Themes may include custom template files for specific purposes or post types.
style.scss: Some themes use SCSS (Sass) for their stylesheets. This is the preprocessed version of the main CSS file.
README.txt or documentation files: Themes often include documentation to explain how to use and customize the theme.
These are some of the most common files you'll find in a WordPress theme. The specific files and their organization may vary from theme to theme, especially if you're using a custom or premium theme. It's important to be familiar with these files if you want to customize or develop your WordPress theme.
In WordPress, the template hierarchy determines how the system chooses which template file to use when displaying different types of content. This hierarchy allows you to create custom templates for specific pages or content types while falling back on more generic templates when needed. Here's an overview of the WordPress template file hierarchy, from the most specific to the most general:
Custom Page Template: The most specific template is a custom page template. If you create a custom template for a specific page (e.g., a "Template Name: Custom Template" comment in the file header), WordPress will use this template for that page. Custom templates take precedence over all other template files.
Custom Post Type Template: If you have custom post types on your site, WordPress will look for a template specifically created for that post type. For example, if you have a custom post type called "portfolio," WordPress will first check for single-portfolio.php before using the generic single post template.
Single Post Template: If no custom post type template is found, WordPress uses the single.php file to display single posts.
Single Page Template: If you're viewing a single page, WordPress will use the page.php template.
Category Archive Template: When viewing a category archive page, WordPress looks for category-slug.php or category-ID.php. If those are not found, it uses category.php.
Tag Archive Template: For tag archive pages, WordPress searches for tag-slug.php or tag-ID.php. If not found, it uses tag.php.
Author Archive Template: When you view an author's archive page, WordPress looks for author-nicename.php. If not found, it uses author-ID.php.
Date-Based Archive Template: Date-based archives have different hierarchy levels, with the most specific being date.php, followed by year.php, month.php, and day.php. These templates handle yearly, monthly, and daily archives, respectively.
Custom Taxonomy Template: If you have custom taxonomies, WordPress looks for taxonomy-taxonomyname.php. For example, if you have a custom taxonomy called "genre," WordPress will search for taxonomy-genre.php.
Custom Post Type Archive Template: When viewing the archive for a custom post type, WordPress looks for archive-posttype.php. If not found, it defaults to archive.php.
Search Results Template: The template for search results is search.php.
404 Error Template: The template for handling 404 (not found) errors is 404.php.
Attachment Template: For individual attachments (such as images or documents), WordPress uses attachment.php.
Home Page Template: The default template for your site's homepage is home.php. If it doesn't exist, WordPress falls back to index.php.
Front Page Template: If you've set a static front page for your website, WordPress uses front-page.php for that page. If not found, it uses home.php.
Generic Index Template: If no other specific template is found, WordPress will use index.php to display the content. This template acts as a fallback for all types of content.
By understanding the WordPress template hierarchy, you can create and customize templates to control the look and functionality of various parts of your website, tailoring them to your specific needs. This hierarchy ensures that WordPress always selects the most appropriate template based on the content being displayed.
]]>Today, I did a little investigating and discovered a tweet which suggested it was the 'Listen live in Spaces' part that was maxing the CPU. The Twitter CPU Optimizer solved this problem and removed the Spaces HTML.
]]>WordPress has gone through a lot of changes in the past few months and one thing I had to get my head around was a totally new editor.
But the users of the site find the new editor difficult to use, so they asked me to reinstall the old 'classic' editor.
But first I had to download the site and the database and install it locally.
One thing I had to work around was that the site uses https redirects, which weren't available on my localhost/127.0.0.1, so these are the settings I had to alter to get it to work.
ini_set('display_errors', 0);
define('WP_CACHE', false);
define('WPCACHEHOME', '/home/andy/www/wordpress/wp-content/plugins/wp-super-cache/' );
define('DB_NAME', 'wp1');
define('WP_DEBUG', false);
define('FS_METHOD', 'direct' );
define('FORCE_SSL_ADMIN', false);
also in the database are 2 settings which I needed to change to get the site to work
siteurl http://127.0.0.1:86
home http://127.0.0.1:86
Once these were set, the site worked, but it was a struggle to work this out.
Next up, updating all the plugins, finding a new theme and disabling unused plugins. Also learning the difference with Elementor, Gutenberg and other things that have changed.
]]>Firstly, I use Wappalyzer to see what technologies the site uses and also take a look at the source code to see what state it is in. I come from a PHP Laravel background, but like to see what technologies a website is run on.
I also like to use tools to quantify how good a website is. In days gone past, there was Alexa ranking, Google PageRank and a number of other ranking tools that no longer work or exist in the same way. Now we have Domain Authority, and other tools have stepped in to replace them.
A great tool I have recently discovered is the Laravel SEO Scanner from a Dutch team called VormKracht. This is a PHP Laravel Artisan command which scans a website for a number of SEO issues. I like it a lot.
A nice all-round tool for analysing a website is rankwatch.com/tools/web-analyzer. This gives a fairly comprehensive summary of the key SEO criteria.
This tool from moz.com/domain-analysis provides a simple value as a domain authority.
Check sitecheck.sucuri.net for malware and to see whether a domain is listed in 9 security websites.
And dnschecker.org/pagerank checks the PageRank for a URL.
This is a good tool for finding keyword related positions for a url. ahrefs.com/keyword-rank-checker
I also like to use command line tools and a great one is whatweb which will show the technologies a website uses in the command line.
This is a set of tools I use alongside looking at the source code and my own londinium scoring system.
What else do you use? I would like to edit this and add more tools, please comment if you use and like any.
]]>Upgrading is always a slightly nerve-wracking thing to do, as in the past I found the upgrade to Linux kernel 4.19 stopped the computer from sleeping, so after a full backup, I went for it.
I started with the full upgrade
apt-get update
apt-get dist-upgrade
This updated the system, but I wanted to also fix a few errors that I had. eg. My copy of Zeal, which is an offline code reader was showing blank pages. So I added the unstable debian repo to provide newer software versions.
# Unstable repo main, contrib and non-free branches, no security updates here
deb http://http.us.debian.org/debian unstable main non-free contrib
deb-src http://http.us.debian.org/debian unstable main non-free contrib
In doing so, I ended up with errors complaining about version conflicts for a number of packages. These were Android packages that I had installed, and I forced their removal using these commands, as they blocked the upgrade process.
dpkg --remove --force-remove-reinstreq adb
dpkg --remove --force-remove-reinstreq android-libadb
dpkg --remove --force-remove-reinstreq android-libbase
dpkg --remove --force-remove-reinstreq android-libcutils
dpkg --remove --force-remove-reinstreq android-libbase
So this gave me a much more up-to-date system with the version 6 kernel.
]]>To install and set up Go on my Debian machine was straightforward, but the version of Go in my Debian repository was 1.15, whereas version 1.19 is available on https://go.dev/dl/
apt-get install -y golang
edit ~/.bashrc
# Golang paths in bashrc
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
so my go files go in /home/andy/go/src
]]>The first thing I did was to look into slow query logging, so editing my.cnf file with a text editor and add the following block of code under the mysqld section:
slow_query_log = 1
slow_query_log_file = /var/log/mysql-slow.log
long_query_time = 2
To restart the mariadb server
sudo systemctl restart mariadb.service
However, after monitoring it for a while, it never logged any slow queries. This got me thinking it was something in the MySQL configuration.
So secondly I installed 'mysqltuner' and it suggested the following optimisations in the my.cnf file.
running
mysqltuner --host 127.0.0.1 --port 3306 --user root --pass 'password'
gave good feedback and suggestions like so:
Variables to adjust:
query_cache_size (=0)
query_cache_type (=0)
query_cache_limit (> 1M, or use smaller result sets)
tmp_table_size (> 16M)
max_heap_table_size (> 16M)
performance_schema = ON enable PFS
innodb_buffer_pool_size (>= 765.8M) if possible.
innodb_log_file_size should be (=16M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
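Taking the "total log files size equals 25% of buffer pool size" guideline above at face value, the per-file target is just the pool size times 0.25, split across the log files. A quick sanity check of the arithmetic (the helper name and the two-file default are my own, for illustration):

```python
# With the default two InnoDB log files, each file should be about
# buffer_pool * 0.25 / 2 under the 25% rule of thumb quoted above.

def suggested_log_file_mb(buffer_pool_mb, n_log_files=2):
    total = buffer_pool_mb * 0.25      # combined log size = 25% of pool
    return total / n_log_files         # split evenly across the log files

# Using the 765.8M buffer pool figure from the mysqltuner output
print(round(suggested_log_file_mb(765.8), 1))  # roughly 95.7 MB per file
```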
I also checked the database tables with
mysqlcheck -u root -p --check --all-databases
and also I checked the logs in for obvious errors.
/var/log/mysql/error.log
/var/log/apache/error.log
I also added indexes and primary keys on the tables, and although this helped a little, it didn't fix the problem.
Finally, I wanted to get live performance stats for the queries, so I installed the laravel debugbar and this gave me the answer. The following query was taking over 10 seconds!
$data['website'] = websites::where('updated_at', '=', '2022-12-12 12:34:44')
->where('dns', '!=', 'failed')
->inRandomOrder()
->take(1)
->first();
The inRandomOrder() I had added was slowing it down dramatically, and once I removed it, the query was back to running in 25 milliseconds!
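inRandomOrder() compiles down to ORDER BY RAND(), which forces the database to shuffle every matching row before returning one. A common cheaper alternative is to count the matches and jump to one random offset instead. Here is a sketch of that idea over a plain list standing in for the table (names and data are my own, not the project's code):

```python
import random

# Sketch: instead of shuffling all rows (ORDER BY RAND()), count the
# matches and fetch one row at a random offset (OFFSET n LIMIT 1).

def pick_random(rows, rng=random.Random(42)):
    if not rows:
        return None
    offset = rng.randrange(len(rows))   # SELECT COUNT(*), then OFFSET offset LIMIT 1
    return rows[offset]

rows = ["site-a", "site-b", "site-c"]
print(pick_random(rows) in rows)  # True
```

In SQL terms that is two fast queries (a COUNT and an OFFSET fetch) rather than one query that sorts the whole result set.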
I have found and am using a number of laravel packages in recent days that solve small problems and improve my project. I will write about each one in turn.
]]>I wrote a blog previously about upgrading to PHP 8.0 and also using ansible to upgrade to PHP version 8.1 only a few months back.
Today I decided to upgrade to PHP 8.2 and it was a fairly similar and simple process. The apt install below installs a comprehensive set of PHP modules to fulfil the requirements of Laravel; this should cover most people's needs.
sudo apt install -y php8.2 php8.2-cli php8.2-fpm php8.2-common php8.2-curl \
  php8.2-mysqlnd php8.2-gd php8.2-opcache php8.2-zip php8.2-intl php8.2-bcmath \
  php8.2-imap php8.2-imagick php8.2-xmlrpc php8.2-readline php8.2-memcached \
  php8.2-memcache php8.2-redis php8.2-mbstring php8.2-apcu php8.2-xml \
  php8.2-bz2 php8.2-tidy libapache2-mod-php8.2 libapache2-mod-fcgid
To activate the new version (or select an older version) the command 'update-alternatives' can be used thus:
sudo update-alternatives --config php
which provides a menu of versions like so:
There are 5 choices for the alternative php (providing /usr/bin/php).
Selection Path Priority Status
------------------------------------------------------------
0 /usr/bin/php.default 100 auto mode
1 /usr/bin/php.default 100 manual mode
2 /usr/bin/php7.4 74 manual mode
3 /usr/bin/php8.0 80 manual mode
* 4 /usr/bin/php8.1 81 manual mode
5 /usr/bin/php8.2 82 manual mode
Press <enter> to keep the current choice[*], or type selection number: 5
I have kept the older versions for now. To set up Apache to use the new version, these commands update the Apache config (this is for both the FPM and Apache module setups):
sudo a2enconf php8.2-fpm
sudo a2disconf php8.1-fpm
sudo a2enmod php8.2
sudo a2dismod php8.1
sudo service apache2 restart
And with the apache restart, I am using the latest and greatest version of PHP!
Now to test that my apps work and see what new error messages have been added.
]]>curl -O https://dl.typesense.org/releases/0.24.0/typesense-server-0.24.0-amd64.deb
sudo apt install ./typesense-server-0.24.0-amd64.deb
check the status:
sudo systemctl status typesense-server.service
and do a healthcheck
curl http://localhost:8108/health
all ok!
One suggestion I saw to improve the Typesense database import is to give each entry a unique id, to make updating easier. So I have added an id field to the database queries like so:
// nwr query
->selectRaw("CONCAT(points.nwr, '/', points.id) AS id")
// londinium query
->selectRaw("CONCAT(seo.route) AS id")
]]>The PHP function parse_url is a good starting point for extracting domain names, but I was also looking to get the top-level domain, which is trickier.
If you split the domain by the dots from the right-hand side, '.com', '.org' and '.net' are all single levels. '.co.uk', '.me.uk' and '.gov.uk' are 2 levels from the right. But there are also domains like '.homeoffice.gov.uk', and the longest I found was 'nodes.k8s.nl-ams.scw.cloud'. So I needed to find another way to extract this information.
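The reason counting dots fails is that public suffixes vary in length, so you have to match the longest known suffix first. Here is a toy sketch of that longest-suffix matching with a tiny hard-coded rule set; the real solution needs the full Public Suffix List, which is exactly what the parser below uses:

```python
# Toy subset of suffix rules -- the real list has thousands of entries
SUFFIXES = {"com", "co.uk", "gov.uk", "homeoffice.gov.uk"}

def split_domain(host):
    """Return (registrable_domain, suffix) by longest-suffix match."""
    labels = host.split(".")
    for i in range(len(labels)):                 # longest candidate suffix first
        suffix = ".".join(labels[i:])
        if suffix in SUFFIXES and i > 0:
            return ".".join(labels[i - 1:]), suffix
    return None, None

print(split_domain("example.co.uk"))           # ('example.co.uk', 'co.uk')
print(split_domain("data.homeoffice.gov.uk"))  # ('data.homeoffice.gov.uk', 'homeoffice.gov.uk')
```

Note that 'data.homeoffice.gov.uk' matches the 3-label suffix before the shorter 'gov.uk' ever gets a chance, which is the behaviour naive right-to-left splitting cannot give you.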
I discovered the repo github.com/jeremykendall/php-domain-parser on github which seems to be the best way in php to extract this from a full domain name. It uses https://github.com/publicsuffix/list this project as a source of truth for the domain registrar and works great.
One little gotcha I encountered was that it needed the php-intl extension, which I didn't have, so composer installed a very old version and it didn't work. Once I installed the extension, I had to force composer to upgrade the package by changing the version in composer.json from version 1.4 to
"jeremykendall/php-domain-parser": "^6.1",
Here is an extract of the code I used to get the suffix and the domain from a url:
use Pdp\Rules;
use Pdp\Domain;
$publicSuffixList = Rules::fromPath('public_suffix_list.dat');
$url = 'www.pref.okinawa.jp'; // <- any url with the http part removed
if (inet_pton($url)==false and strpos($url, ".")) { // check for ip addresses and contains a dot
$domain = Domain::fromIDNA2008($url);
$result = $publicSuffixList->resolve($domain);
echo $result->registrableDomain()->toString(); //display 'pref.okinawa.jp';
echo $result->suffix()->toString(); //display 'jp';
}
The PHP function inet_pton($url) checks whether the variable $url is an IP address, which is a 'feature' of the database I am checking. This skips IP addresses like '1.2.3.4', as those cause errors. I am also checking that the domain has at least one '.' in it.
I wrote another post recently on looking up the DNS status of a domain which is a useful and quick initial test of a domain. This is the function it uses dns-get-record.php
Another cool domain related code I stumbled upon is extracting the domain in a mysql select query like so:
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX(website, '/', 3), '://', -1), '/', 1), '?', 1) AS domain FROM domains;
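To unpick what those nested SUBSTRING_INDEX calls are doing, here is the same logic translated step by step into plain string splitting (a Python sketch for illustration; the helper mirrors SQL's SUBSTRING_INDEX(str, delim, count) semantics):

```python
def substring_index(s, delim, count):
    """Mimic MySQL SUBSTRING_INDEX: keep `count` parts from the left
    (positive count) or from the right (negative count)."""
    parts = s.split(delim)
    if count > 0:
        return delim.join(parts[:count])
    return delim.join(parts[count:])

def extract_domain(website):
    s = substring_index(website, "/", 3)      # keep 'scheme://host'
    s = substring_index(s, "://", -1)         # drop the scheme
    s = substring_index(s, "/", 1)            # drop any path remainder
    return substring_index(s, "?", 1)         # drop any query string

print(extract_domain("https://www.example.com/page?x=1"))  # www.example.com
```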
So now I can use this to create a master list of domains for my project and remove those from domains I do not want.
]]>I wanted to make something similar and some of my favourite Awesome pages include:
I think I am going to start my own one, with my favourite links to things I like, use and regularly visit.
It's going to be called
And I am going to start it on my github profile frontpage
]]>SEO is something I have done a lot of in the past, but it has changed a lot in recent years. I wanted to write about the tools and techniques I have had success with.
My current favourite keyword research tool is https://ahrefs.com/keyword-generator as it provides the top 100 key phrases and is up-to-date. Alongside performing searches on Google, It is easy to find what people are searching for, with regards to a phrase.
The optimum length of the meta description tag is around 150 characters, and this is often displayed as the summary of a webpage in search engines. Therefore it is vital to optimise the description tag with user-search-friendly keywords while it still reads well.
The title tag is also a very important page ranking factor. It is often the actual link of the page in the search results. I have seen recommendations that say it should be about 60 characters long.
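A quick way to audit those two tags is to pull them out of the HTML and measure them. This is a minimal regex-based sketch (the ~60 and ~150 character budgets are the rules of thumb above, not hard limits, and a real crawler would use a proper HTML parser):

```python
import re

def tag_lengths(html):
    """Return (title_length, description_length) for a page's HTML."""
    title = re.search(r"<title>(.*?)</title>", html, re.S)
    desc = re.search(r'<meta name="description" content="(.*?)"', html)
    return (len(title.group(1)) if title else 0,
            len(desc.group(1)) if desc else 0)

# Hypothetical page markup for illustration
html = '<title>Ilminster town guide</title>' \
       '<meta name="description" content="A guide to Ilminster, Somerset.">'
t_len, d_len = tag_lengths(html)
print(t_len <= 60, d_len <= 160)  # True True
```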
]]>To test it out, I added about 700 website URLs I know have tiktok links. I added the function, then pushed it live to github. Then I did a composer update and it updated as if by magic!
I added the code to the spider like this sending in the list of links found on the page.
echo $smle->getTiktok($linkArray);
Here is a list of the unusual ones found in the 700+ websites I checked. The top lot are just http and the bottom ones https, with the vast majority in this format: https://www.tiktok.com/@username. Those without the www. redirect to the www. version of tiktok.
http://tiktok.com/@frontlinerecruitment
http://tiktok.com/@mycarboncoach
http://tiktok.com/@nesmuseum
http://tiktok.com/@suffolknewcollege
http://tiktok.com/@thewheelspecialist
http://tiktok.com/@virginutty
http://vm.tiktok.com/gcHFxw/
http://www.tiktok.com/@peakwildlifepark
https://tiktok.com/@flightclubdarts
https://tiktok.com/@intersportelverys
https://tiktok.com/@skateeastanglia
https://tiktok.com/@uniofeastanglia
https://tiktok.com/@wovendurham
https://vm.tiktok.com/cCXD3U
https://vm.tiktok.com/ZMe6rQbhe/
https://vm.tiktok.com/ZMekeLrLn/
https://vm.tiktok.com/ZMeMC59yM/
https://vm.tiktok.com/ZML7dohMg/
https://vm.tiktok.com/ZMLfy3ckD
https://vm.tiktok.com/ZMLh8vpwB/
https://vm.tiktok.com/ZMN82frh1/
https://vm.tiktok.com/ZMNrLa72g/
https://vm.tiktok.com/ZMR7XdHPy/
https://vm.tiktok.com/ZMRSK4B2S/
https://vm.tiktok.com/ZSuD2T54
https://www.tiktok.com/@_bnuni
https://www.tiktok.com/@zatugames
https://www.tiktok.com/discover/murphymachinery
https://www.tiktok.com/discover/pizzaexpress?lang=en
https://www.tiktok.com/discover/swg3-glasgow?lang=en
https://www.tiktok.com/discover/the-belfry-hotel
https://www.tiktok.com/en/
https://www.tiktok.com/legal/page/eea/privacy-policy/en
https://www.tiktok.com/tag/bristolsu/
I also found one had a tracking link like this:
https://www.tiktok.com/@username?_ga=2.225605338.1933392382.1646055423-1170747732.1642770380&_gac=1.241822390.1645437651.EAIaIQobChMI8f2o8sSQ9gIVQoxoCR2W0gU3EAAYASAAEgK3g_D_BwE
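Links like that one are worth normalising before storing, otherwise the same profile shows up as many distinct URLs. A sketch of stripping the Google Analytics tracking parameters (the parameter set here is my own assumption of what to drop):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters that only carry analytics state, not content
TRACKING = {"_ga", "_gac", "utm_source", "utm_medium", "utm_campaign"}

def strip_tracking(url):
    """Remove known tracking parameters from a URL's query string."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://www.tiktok.com/@username?_ga=2.2256&_gac=1.2418"))
# https://www.tiktok.com/@username
```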
Investigating these, the 'vm.' links redirect to the homepage so aren't much use. I wonder what these are about...
The http ones redirect to the https site, and the '/discover' and '/tag' links are both useful search-type queries within the site.
I don't use tiktok myself, but I can see it growing in use on websites. I may add links to Digg, Delicious, foursquare, wikipedia and reddit next. Is there a social media website you use that you think would be useful in this package? Tell me in the comments...
Hope you find this interesting, useful and have learnt something about tiktok here.
]]>This is a second part, building on my first blog about Setting-Up-A-Typesense-Website-Search-Engine/
Typesense uses the JSONL format, which is lines of JSON separated with the '\n' line break (note: the PHP_EOL constant doesn't guarantee this; on Windows it is '\r\n'). This was my first gotcha!
The second gotcha was that the JSONL file had to match the names of the fields in the schema. I had an id field that I didn't want to import, but that meant the import command ignored every row with extra fields in the JSONL file. So I removed the 2 fields I didn't need and indexed over 150,000 lines of JSON in a few minutes.
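Both gotchas come down to how the JSONL is generated: one object per line, joined with a plain '\n', and only the schema's fields in each object. A minimal sketch of producing such a file (field names here match the schema in this post; the helper is illustrative):

```python
import json

# JSONL = one JSON object per line, joined with an explicit '\n'.
# A Windows-style '\r\n' separator breaks the import, so the newline
# is written explicitly rather than relying on the platform default.

def to_jsonl(rows):
    return "\n".join(json.dumps(row) for row in rows)

rows = [{"title": "Home", "url": "/way/1"}, {"title": "About", "url": "/way/2"}]
doc = to_jsonl(rows)
print(doc.count("\n"))      # 1 separator for 2 rows
print("\r" not in doc)      # True: no carriage returns
```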
This is the file I used to import the JSON file.
<?php
require_once __DIR__ . '/vendor/autoload.php';
use Typesense\Client;
// setup the schema
$websiteSchema = [
'name' => 'websites',
'fields' => [
['name' => 'title', 'type' => 'string'],
['name' => 'description', 'type' => 'string', 'facet' => true],
['name' => 'website', 'type' => 'string'],
['name' => 'url', 'type' => 'string']
]
];
$client = new Client(
[
'api_key' => 'API_KEY_HERE',
'nodes' => [
[
'host' => 'localhost', // For Typesense Cloud use xxx.a1.typesense.net
'port' => '8108', // For Typesense Cloud use 443
'protocol' => 'http', // For Typesense Cloud use https
],
],
'connection_timeout_seconds' => 2,
]
);
// delete the schema
$client->collections['websites']->delete();
// create the new schema from above in typesense
$client->collections->create($websiteSchema);
// import json file of data
$websitesData = file_get_contents('websites.jsonl');
$client->collections['websites']->documents->import($websitesData, ['action' => 'create']);
// show the result
$result = $client->collections['websites']->retrieve();
var_dump($result);
This is the line that shows me the count of pages indexed and that it had worked!
string(8) "websites"
["num_documents"]=>
int(157788)
I made a basic controller in laravel to respond to queries passed as a GET string in the url, like this: http://localhost/search?q=chiswick&page=2. Here is the php code.
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use Typesense\Client;
class SearchController extends Controller
{
public function index(Request $request)
{
echo "Search for ";
$q = $request['q'];
$page = $request['page'];
if (!isset($page)) {
$page=1;
}
echo $q ."<br> page". $page;
$client = new Client(
[
'api_key' => 'API_KEY_HERE',
'nodes' => [
[
'host' => 'localhost', // For Typesense Cloud use xxx.a1.typesense.net
'port' => '8108', // For Typesense Cloud use 443
'protocol' => 'http', // For Typesense Cloud use https
],
],
'connection_timeout_seconds' => 2,
]
);
$searchParameters = [
'q' => $q,
'query_by' => 'description, title',
'page' => $page,
'per_page' => 50
];
$results = $client->collections['websites']->documents->search($searchParameters);
echo "<hr>found". $results['found'];
foreach ($results['hits'] as $row) {
// var_dump($row);
echo $row['document']['url'];
echo "<br>";
echo $row['document']['website'];
echo "<br>";
echo $row['document']['title'];
echo "<br>";
echo $row['document']['description'];
echo "<hr>";
}
}
}
and this creates a page like this:
And this is great for today. I am now going to add more columns of data from the database into the json file and import it.
Also going to look at making imports for other pages in the website, ranking these results and improving this all further.
Very exciting, as it hasn't taken very long to get a working version up and running.
]]>It seemed to be caused by the mariadb and redis servers, which I killed with the commands
top
systemctl stop mariadb
systemctl stop redis
If you force the machine to turn off, it risks rebooting into an error and having to run 'fsck' to fix it, which isn't something I want to do, so I wanted to document how to turn it off without holding down the power key.
The first option I use is to log in to a virtual console. You can get to these with Ctrl+Alt+F1 through Ctrl+Alt+F6. Ctrl+Alt+F7 is the console where your X server is running, so to get back into your GUI window manager, type:
Ctrl+Alt+F7 (or sometimes Alt+F7 or Ctrl+Alt+F8)
Now in a terminal, I can log in as the root user and shut down the machine with
shutdown now
When you are in these 6 consoles, you can press Alt+RightArrow or Alt+LeftArrow to move to the next/previous console respectively. I think you can also return to the GUI with the command 'chvt 7'.
In this console, you can often log in as root and type the command 'top' to see what is causing the problem; mine is often the mysql db on this machine going berserk and maxing out the CPU. You can then type 'shutdown now' to turn off the machine cleanly.
If this doesn't work, the next option is the REISUB (R E I S U B) key sequence. Here you press Alt+PrintScreen+R, then wait a few seconds, followed by Alt+PrintScreen+E, Alt+PrintScreen+I, Alt+PrintScreen+S, Alt+PrintScreen+U, and finally Alt+PrintScreen+B, which will reboot the machine as if you had held the power button down for 6-10 seconds.
Each key does a task as mentioned below
unRaw (take control of keyboard back from X),
tErminate (send SIGTERM to all processes, allowing them to terminate gracefully),
kIll (send SIGKILL to all processes, forcing them to terminate immediately),
Sync (flush data to disk),
Unmount (remount all filesystems read-only),
reBoot.
So with a combination of these commands and key strokes, I have managed to shutdown the machine in a more gentle way.
Hope it helps.
]]>I installed it locally with these 2 commands
curl -O https://dl.typesense.org/releases/0.23.1/typesense-server-0.23.1-amd64.deb
sudo apt install ./typesense-server-0.23.1-amd64.deb
Now I can run the typesense server thus:
typesense-server --data-dir=/var/lib/typesense --api-key=API_KEY_HERE
The API_KEY is created in the /etc/typesense/typesense-server.ini file, and you can check the service status with this command.
systemctl status typesense-server.service
resulting in:
● typesense-server.service - Typesense Server
Loaded: loaded (/etc/systemd/system/typesense-server.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-12-01 05:21:39 GMT; 7h ago
Docs: https://typesense.org
Main PID: 1096 (typesense-serve)
Tasks: 91 (limit: 13630)
Memory: 89.7M
CPU: 46.755s
CGroup: /system.slice/typesense-server.service
└─1096 /usr/bin/typesense-server --config=/etc/typesense/typesense-server.ini
systemd[1]: Started Typesense Server.
typesense-server[1096]: Log directory is configured as: /var/log/typesense
typesense-server[1096]: E20221201 05:22:17.530272 1778 raft_server.h:62] Peer refresh failed, error: Doing another configuration change
and to run a status check.
curl http://localhost:8108/health
{"ok":true}
Using the PHP version of the client, I am going to create a simple schema for websites
<?php
$websiteSchema = [
'name' => 'websites',
'fields' => [
['name' => 'url', 'type' => 'string'],
['name' => 'title', 'type' => 'string'],
['name' => 'description', 'type' => 'string[]', 'facet' => true]
],
'default_sorting_field' => 'url'
];
$client->collections->create($websiteSchema);
Import some data with this JSON file. JSONL is a form of JSON with multiple lines of JSON, one object per line, as this article explains.
This is an example of one row in the JSON.
{"id":"100275575","nwr":"way","title":"Home | Dover Castle Hostel","description":"Dover Castle, The Dover Castle Hostel is in the perfect location to explore London for budget travellers and groups. Being so close to the heart of London just check in at Dover Castle and enjoy the lively atmosphere with the friendly staff","url":"/way/100275575"}
And to import it
$websitesData = file_get_contents('websites.jsonl');
$client->collections['websites']->documents->import($websitesData);
Now to test it, here is a command line curl command
curl -H "X-TYPESENSE-API-KEY: API_KEY_HERE" \
"http://localhost:8108/collections/websites/documents/search?q=dover&query_by=description"
Which returns this json result
{"facet_counts":[],"found":1,"hits":[{"document":{"description":"Dover Castle, The Dover Castle Hostel is in the perfect location to explore London for budget travellers and groups. Being so close to the heart of London just check in at Dover Castle and enjoy the lively atmosphere with the friendly staff","id":"100275575","nwr":"way","title":"Home | Dover Castle Hostel","url":"/way/100275575"},"highlights":[{"field":"description","matched_tokens":["Dover","Dover"],"snippet":"<mark>Dover</mark> Castle, The <mark>Dover</mark> Castle"}],"text_match":72341265420648449}],"out_of":100,"page":1,"request_params":{"collection_name":"websites","per_page":10,"q":"dover"},"search_cutoff":false,"search_time_ms":304}
and in laravel this is the controller
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use Typesense\Client;
class SearchController extends Controller
{
public function index(Request $request)
{
echo "search for ";
$q = $request->input('q');
echo $q;
$client = new Client(
[
'api_key' => 'API_KEY_HERE',
'nodes' => [
[
'host' => 'localhost', // For Typesense Cloud use xxx.a1.typesense.net
'port' => '8108', // For Typesense Cloud use 443
'protocol' => 'http', // For Typesense Cloud use https
],
],
'connection_timeout_seconds' => 2,
]
);
$searchParameters = [
'q' => $q,
'query_by' => 'description'
];
$result = $client->collections['websites']->documents->search($searchParameters);
dd($result);
}
}
which results in
array:8 [▼ // app/Http/Controllers/SearchController.php:37
"facet_counts" => []
"found" => 13
"hits" => array:10 [▼
0 => array:3 [▼
"document" => array:5 [▶]
"highlights" => array:1 [▶]
"text_match" => 72341265420648449
]
1 => array:3 [▼
"document" => array:5 [▼
"description" => "Dover Castle, The Dover Castle Hostel is in the perfect location to explore London for budget travellers and groups. Being so close to the heart of London just ▶"
"id" => "100275575"
"nwr" => "way"
"title" => "Home | Dover Castle Hostel"
"url" => "/way/100275575"
]
"highlights" => array:1 [▼
0 => array:3 [▼
"field" => "description"
"matched_tokens" => array:1 [▶]
"snippet" => "perfect location to explore <mark>London</mark> for budget travellers and"
]
]
"text_match" => 72341265420648449
]
2 => array:3 [▶]
3 => array:3 [▶]
4 => array:3 [▶]
5 => array:3 [▶]
6 => array:3 [▶]
7 => array:3 [▶]
8 => array:3 [▶]
9 => array:3 [▶]
]
"out_of" => 100
"page" => 1
"request_params" => array:3 [▼
"collection_name" => "websites"
"per_page" => 10
"q" => "london"
]
"search_cutoff" => false
"search_time_ms" => 415
]
This is a great start, I am going to publish this and explore the options for displaying this resulting search json.
Questions/Ideas:
If anyone has any suggestions, links or tips, please comment below or via twitter
]]>Firstly I changed a few settings to work with Lambda. You can only write to the /tmp directory, so it is necessary to change these in the .env file, following this tutorial:
CACHE_DRIVER=array
VIEW_COMPILED_PATH=/tmp/storage/framework/views
SESSION_DRIVER=array
LOG_CHANNEL=stderr
Then I pasted the following into app/Providers/AppServiceProvider.php
public function boot()
{
// Make sure the directory for compiled views exist
if (! is_dir(config('view.compiled'))) {
mkdir(config('view.compiled'), 0755, true);
}
}
and run
php artisan config:cache
php artisan config:clear
Next up I wanted to access a mysql db which is in an EC2 instance on AWS.
Update the mysql config in the /etc/mysql directory to allow external ips like so:
sudo nano 50-server.cnf
bind-address = 0.0.0.0
you can alternatively bind to multiple addresses thus:
bind-address = 10.0.0.1,10.0.1.1,10.0.2.1
open the mysql db in the firewall
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
add a user to the mysql config
GRANT ALL ON database_name.* TO 'user'@'%' IDENTIFIED BY 'password';
And most importantly, adding port 3306 for the mysql db to the AWS security group for the EC2 instance
Testing the connection with
mysql -u user -h ec2ipAddress -p
So now I have a lambda laravel repo which can connect to an external database.
]]>One idea to improve the speed and the number of websites I can visit and index is by using serverless technology. This way I could create a server instance for each website spidering session and queue them all at once with AWS Lambda.
Using the AWS free tier, you can make 1 million requests a month and use 400,000 GB-seconds of compute time per month. Sounds like a lot!
So I started with a normal laravel install
composer create-project laravel/laravel aws-bref-serverless-lambda
cd aws-bref-serverless-lambda/
chmod -R 777 storage/
I have watched a number of videos to learn how to do this and this was the best to start with. It is made by @matthieunapoli
composer require bref/laravel-bridge --update-with-dependencies
php artisan vendor:publish --tag=serverless-config
The serverless framework itself is made by twitter.com/goserverless/
npm install -g serverless
By running the command 'serverless', I was able to configure serverless with the secret and access keys for AWS and deploy easily.
$ serverless
? No AWS credentials found, what credentials do you want to use? Local AWS Access Keys
? Do you have an AWS account? Yes
If your browser does not open automatically, please open this URL:
https://console.aws.amazon.com/iam/home?region=us-east-1#/users$new?step=final&accessKey&userNames=serverless&permissionType=policies&policies=arn:aws:iam::aws:policy%2FAdministratorAccess
? In your AWS account, create an AWS user with access keys. Then press [Enter] to continue.
? AWS Access Key Id: 'key pasted here'
? AWS Secret Access Key: 'secret access key pasted here'
✔ AWS credentials saved on your machine at "~/.aws/credentials". Go there to change them at any time.
? Do you want to deploy now? Yes
Deploying laravel to stage dev (us-east-1)
✔ Service deployed to stack laravel-dev (197s)
endpoint: ANY - https://aaaaaaaaa.execute-api.us-east-1.amazonaws.com
functions:
web: laravel-dev-web (30 MB)
artisan: laravel-dev-artisan (30 MB)
What next?
Run these commands in the project directory:
serverless deploy Deploy changes
serverless info View deployed endpoints and resources
serverless invoke Invoke deployed functions
serverless --help Discover more commands
1 deprecation found: run 'serverless doctor' for more details
And it was live!
So I deleted it with
serverless remove
To deploy it again I ran
serverless deploy
Then I ran the php artisan CLI command remotely, and it worked!
$ vendor/bin/bref cli laravel-dev-artisan inspire
“ Happiness is not something readymade. It comes from your own actions. ”
— Dalai Lama
In part 2 I will add some production-level code to connect to a mysql database.
]]>Here it is:
]]>dns_get_record, however, isn't very reliable, as it often caused an error. The '@' sign is there to suppress errors, but there are known bugs with this function. The DNS_ALL option does seem to help, but it is a lot slower than just the A record lookup.
$result = @dns_get_record($domain, DNS_A);
So I decided to use the Linux Command Line tool dig which has proved much more reliable.
try {
    $result = shell_exec("dig +short " . $domain . ' A');
} catch (Exception $e) {
    break; // assumes this runs inside a loop over domains
}
if (isset($result)) {
    // shell_exec returns null when dig prints nothing, so isset
    // is false for domains with no A record
    echo " exists" . PHP_EOL;
} else {
    echo "NO dns: " . $domain . PHP_EOL;
}
This way I can check domains quickly and efficiently and mark those that dont work without wasting time trying to index a website which hasnt even got a DNS entry.
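The `dig +short` output this relies on is just one record per line, sometimes with a CNAME target printed before the A records. A sketch of pulling the IPv4 addresses out of that text (a loose dotted-quad check for illustration, not full IP validation; the sample output is made up):

```python
def a_records(dig_output):
    """Extract IPv4-looking lines from `dig +short ... A` output.
    Empty or missing output means no A record was found."""
    if not dig_output:
        return []
    records = []
    for line in dig_output.strip().splitlines():
        parts = line.strip(".").split(".")
        if len(parts) == 4 and all(p.isdigit() for p in parts):
            records.append(line.strip())
    return records

out = "cdn.example.net.\n93.184.216.34\n"
print(a_records(out))   # ['93.184.216.34']
print(a_records(None))  # []: no output means no DNS entry
```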
]]>So I came up with the idea of using a terminal browser links2 to load the page and using screen to detach from the ssh terminal window so it can run in the background.
I installed both with the lovely apt command on the debian box thus:
sudo apt install screen links2
and ran it with the '-html-auto-refresh' option to allow html refreshes to work.
screen
links2 -html-auto-refresh 1 http://localhost/runProcess
Screen allows me to run the process in the background and disconnect, leaving it running. To detach from the session I use the key combination
Ctrl+a, Ctrl+d
then to return to the window type
screen -r
The advantage is I can log off from the ssh session, reconnect from another computer and return to it. This also means it can run in the background while you use the terminal to do other things.
]]>These are the error codes and the counts for these errors
| HTTP Error | Count |
|---|---|
| 401 | 13 |
| 402 | 6 |
| 403 | 1274 |
| 404 | 2512 |
| 406 | 1 |
| 409 | 9 |
| 410 | 13 |
| 423 | 2 |
| 426 | 1 |
| 429 | 15 |
| 453 | 1 |
| 500 | 131 |
| 502 | 7 |
| 503 | 165 |
| 523 | 1 |
| 526 | 2 |
| 530 | 4 |
| connection exception | 3558 |
| request exception | 400 |
Here is a list explaining the Error Codes
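A tally like the table above can be produced with a simple counter over the spider's per-fetch results. This is an illustrative sketch, not the spider's actual code, and the result list here is made up:

```python
from collections import Counter

# Each failed fetch yields either an HTTP status code or an
# exception label; tallying them is one Counter call.
results = [404, 403, 404, "connection exception", 500, 404]
counts = Counter(results)

for code, n in counts.most_common():
    print(code, n)   # most frequent failure first
```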
The spider uses Guzzle and works reasonably well.
My first attempt to fix this was to add the following headers to the Guzzle Client, but this didn't help much.
$jar = new \GuzzleHttp\Cookie\CookieJar();
$client = new \GuzzleHttp\Client(
    [
        'cookies' => $jar,
        'timeout' => 8.0,
        'http_errors' => false,
        'base_uri' => $url,
        'allow_redirects' => ['strict' => true],
        // request headers must be nested under the 'headers' option;
        // passed as top-level options, Guzzle silently ignores them
        'headers' => [
            'Referer' => 'http://www.google.com/',
            'Accept-Encoding' => 'gzip, deflate, br',
            'Accept-Language' => 'en-GB,en-US;q=0.9,en;q=0.8',
            'Accept' => 'text/html',
            'User-Agent' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36'
        ]
    ]
);
My second attempt was to use Laravel Dusk and nunomaduro's laravel-console-dusk, but this had the same problem.
//the cookie file name
$cookie_file = 'cookies.txt';
//create the driver
$process = (new ChromeProcess())->toProcess();
$process->start();
$options = (new ChromeOptions())->addArguments(['--disable-gpu','--enable-file-cookies','--no-sandbox', '--headless']);
$capabilities = DesiredCapabilities::chrome()->setCapability(ChromeOptions::CAPABILITY, $options);
$driver = retry(5, function () use ($capabilities) {
return RemoteWebDriver::create('http://localhost:9515', $capabilities);
}, 50);
$this->browse(function ($browser) use ($id, $nwr, $url) {
    $browser->visit($url)
        ->pause(5);
});
My third attempt was to use cURL, and it worked like magic:
$ch=curl_init("$url");
curl_setopt_array($ch, array(
CURLOPT_USERAGENT=>'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0',
CURLOPT_ENCODING=>'gzip, deflate',
CURLOPT_HTTPHEADER=>array(
'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language: en-US,en;q=0.5',
'Accept-Encoding: gzip, deflate',
'Connection: keep-alive',
'Upgrade-Insecure-Requests: 1',
),
));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_MAXREDIRS, 5);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 6);
curl_setopt($ch, CURLOPT_TIMEOUT, 6);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
// curl_setopt($ch, CURLOPT_VERBOSE, true);
$htmlORIG = curl_exec($ch);
if (curl_errno($ch)) {
    print "CURL ERROR: " . curl_error($ch);
}
// read the request info before closing the handle
$http_status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$ip = curl_getinfo($ch, CURLINFO_PRIMARY_IP);
curl_close($ch);
echo PHP_EOL . $http_status . PHP_EOL;
echo "IP: " . $ip . PHP_EOL;
This allowed me to spider sites that the previous two had blocked, and solved all the false errors from the first Guzzle-based spider.
Although not perfect (it still has problems with sites using Cloudflare), it was a huge step in the right direction.
I would be interested in hearing how others handle spidering sites protected by Cloudflare, and also ways to do the same when using Guzzle and Laravel Dusk.
]]>I have set up a Debian box to play with, just like the free Debian EC2 instance I have on AWS :)
Following the steps in the browser, I created a VM and logged in:
ssh -i ~/.ssh/key.pem azureuser@1.2.3.4
Note the user is azureuser, but the rest of the process is the same as setting up on AWS or locally.
I started with a copy of my Ansible-aws repo and duplicated it for the azure server.
So now I have uploaded this as Ansible-Azure-Debian-Lamp and ran it like so:
ansible-playbook main.yml --key-file /home/andy/.ssh/PRIVATEKEY.pem -e 'ansible_python_interpreter=/usr/bin/python3'
There were only two errors, and having made the following changes to deal with them, it worked.
FAILED! => {"changed": false, "msg": "No package matching 'python-pymysql' is available"}
solution: remove python-pymysql as it is already installing python3-pymysql
FAILED! => {"changed": false, "msg": "Failed to find required executable gpg in paths: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"}
solution: add gnupg to the packages to install
and it all worked!
I uploaded a Laravel app to /var/www/laravel and the Ansible scripts set up Apache, setting up the Laravel app with the usual composer install and a chmod of the /storage directory. The only issue was with user permissions, as the /var/www/ directory is owned by root, which I solved with:
sudo chown -R $USER:$USER /var/www/laravel
and now I have a working Laravel app set up with Ansible in less than an hour :)
]]>I thought, 'There must be a Laravel way', and after some searching, @themsaid has made Ibis which is lovely, fast, works well and looking at the source code, is fairly simple.
I installed it slightly differently from the instructions, installing it in a new ebook directory like so:
composer require themsaid/ibis:* -W
The initialization command to create the repo was this
./vendor/bin/ibis init
It comes with six example markdown pages from one of Mohamed's books, and the command to build the PDF is:
./vendor/bin/ibis build
This created a nice clean PDF with minimal fuss. You can use CSS to style it, and it accepts images in the /assets directory.
The one thing I wonder is how you would create an index at the back. But for today, this is a nice new tool and I can focus on writing the markdown.
]]>These are only calling 6 controllers and simple static views, like so:
Route::get('/embassy', ['uses' =>'FeatureController@index'])->where(['feature'=>"amenity=embassy"]);
Route::get("/lidl", ['uses' =>'BrandController@index'])->where(['brand'=>"Lidl"]);
Route::get('/NW9', ['uses' =>'PlaceController@index'])->where(['lat'=>"51.58", 'long'=> "-0.25", 'feature'=> "The Hyde"]);
Route::get('/Somerset_Ilminster', ['uses' =>'PlaceController@index'])->where(['lat'=>'50.926998', 'long'=> '-2.913960', 'feature'=> 'Ilminster_(Somerset)']);
But these make up the majority of them, so now it is time to address this and move them to a database table for the feature, brand and place controllers.
I am planning to use a Fallback route as the last route in the /routes/web.php file to send them all to a RouteController, which will read the database and dispatch them to the correct controller.
Having a quick look at projects on GitHub, https://github.com/luensys/laravel-database-routes seems to import the routes from the route file into a database, and so does https://github.com/douma/laravel-database-routes
I wonder if anyone has used these?
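As a sketch of the plan (the table layout and names here are hypothetical), the fallback controller would match the request slug against rows in a routes table and dispatch from there; in routes/web.php it would be wired up with Route::fallback() as the last route:

```php
<?php
// Hypothetical lookup a RouteController could perform: each row maps
// a URL slug to a controller and the 'where' parameters it needs,
// mirroring the hard-coded routes shown above.
function resolveRoute(array $table, string $slug): ?array
{
    foreach ($table as $row) {
        if ($row['slug'] === $slug) {
            return $row;
        }
    }
    return null; // the caller would abort(404) here
}

// rows as they might come back from the database table
$table = [
    ['slug' => 'embassy', 'controller' => 'FeatureController', 'wheres' => ['feature' => 'amenity=embassy']],
    ['slug' => 'lidl',    'controller' => 'BrandController',   'wheres' => ['brand' => 'Lidl']],
];
```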
]]>composer require nunomaduro/larastan:^2.0 --dev
Create a rules file called 'phpstan.neon':
includes:
- ./vendor/nunomaduro/larastan/extension.neon
parameters:
paths:
- app/
# Level 9 is the highest level
level: 0
# ignoreErrors:
# - '#PHPDoc tag @var#'
#
# excludePaths:
# - ./*/*/FileToBeExcluded.php
#
# checkMissingIterableValueType: false
So I started at level 0, which is the lowest level, and ran it thus:
./vendor/bin/phpstan analyse
and it passed.
Level 1 was my first failure, so I added:
if (!isset($prefix)) {
abort(404);
}
and it's green. How cool :)
Level 5 started with more errors, so I added a return type for the views:
use Illuminate\Contracts\View\View;
class BrandController extends Controller
{
public function index(): View
There were also redirects, which needed:
use Illuminate\Http\RedirectResponse;
class NWRController extends Controller
{
public function index(Request $request, $id): View|RedirectResponse
I added a few types in the function arguments and it is all green!
Level 8
This error is caused by this code:
$route = (\Route::current());
$data['brand'] = $route->wheres['brand'];
and I am not sure of the solution to this, as casting with (string) doesn't work. Any ideas appreciated...
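Level 8 is PHPStan's nullability level: \Route::current() is typed as returning a nullable route, so level 8 flags any dereference that could happen on null. The usual fix is an explicit null check before the access, which narrows the type. A standalone sketch of the pattern, where currentRoute() is just a hypothetical stand-in for \Route::current():

```php
<?php
// Hypothetical stand-in for \Route::current(), which may return null.
function currentRoute(): ?object
{
    return (object) ['wheres' => ['brand' => 'Lidl']];
}

$route = currentRoute();
if ($route === null) {
    throw new RuntimeException('no current route'); // abort(404) in Laravel
}
// after the check, PHPStan knows $route is not null
$brand = $route->wheres['brand'];
```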
]]>Today, as part of the upgrade process for my Laravel site, I wanted to update this Ansible Playbook to use PHP 8.1.
The older versions of Laravel are now all out of date, see laravel.com/docs/9.x/releases#support-policy, so it is time to upgrade to the latest and greatest version, PHP 8.1. (I know PHP 8.2 is out, but that isn't mentioned on the Laravel PHP versions page, so I will leave that for another day.)
The first step was to add the install to roles/php/tasks/main.yml like so:
# install php8.1
- name: install php 8.1 packages
  tags: php8.1
  apt:
    name:
      - php8.1
      - php-mysql
      - libapache2-mod-php
      - php8.1-mysql
      - php8.1-cli
      - php8.1-common
      - php8.1-snmp
      - php8.1-ldap
      - php8.1-curl
      - php8.1-mbstring
      - php8.1-zip
      - php8.1-tidy
      - php8.1-opcache
      - php8.1-xml
      - php8.1-fpm
      - libapache2-mod-php8.1
    state: present
    cache_valid_time: 3600
  become: true
I added a tag for php8.1 to the Ansible playbook to allow me to just run this
ansible-playbook -i hosts workstation.yml --ask-become-pass --ask-pass --tags php8.1
Although this worked and the command-line version of PHP was 8.1, the Apache webserver was still on 8.0. After hunting around, I found the following was required to fully stop the PHP 8.0 version and have 8.1 on the webserver.
sudo systemctl stop php8.0-fpm
sudo systemctl disable php8.0-fpm
sudo a2disconf php8.0-fpm
sudo systemctl start php8.1-fpm
sudo systemctl enable php8.1-fpm
This translates into Ansible playbook as thus:
- name: Stop service php8.0-fpm on debian, if running
  tags: php8.1
  ansible.builtin.systemd:
    name: php8.0-fpm
    state: stopped
- name: Disable service php8.0-fpm
  tags: php8.1
  ansible.builtin.systemd:
    name: php8.0-fpm
    enabled: no
- name: Disable the Apache2 module php8.0-fpm
  tags: php8.1
  community.general.apache2_module:
    state: absent
    name: php8.0-fpm
- name: Make sure php8.1-fpm is running
  tags: php8.1
  ansible.builtin.systemd:
    state: started
    name: php8.1-fpm
- name: Enable service php8.1-fpm
  tags: php8.1
  ansible.builtin.systemd:
    name: php8.1-fpm
    enabled: yes
Now the apache server and command line are both using PHP 8.1.
This has been pushed to the github ansible playbook repo, hope it helps you.
]]>Although I could have used a service like Laravel Shift, I wanted to do it by hand to see the differences.
The first step is to install an empty version of Laravel 9, which gives me two versions with the following URLs.
I simply copied over the files from the old version to the new repo
and updated the .env file with the db variables and a few settings.
When I loaded localhost:91/ I got a "Target Class Controller Does Not Exist" error, which is explained here:
laravel.com/docs/8.x/upgrade and here stackoverflow.com/questions/63807930/error-target-class-controller-does-not-exist-when-using-laravel-8
There are a number of solutions, but this one worked.
Define the namespace in RouteServiceProvider as in the old version:
App\Providers\RouteServiceProvider
public function boot()
{
$this->configureRateLimiting();
$this->routes(function () {
Route::prefix('api')
->middleware('api')
->namespace($this->namespace)
->namespace('App\Http\Controllers') <------------ Add this
->group(base_path('routes/api.php'));
Route::middleware('web')
->namespace($this->namespace)
->namespace('App\Http\Controllers') <------------- Add this
->group(base_path('routes/web.php'));
});
}
The process was pretty easy, and this got the basic site working. In the next blog posts I will explain upgrading the PHP version and also getting the Laravel Mix, Vite and webpack build process working.
It is a delight to work with Laravel; the latest versions are not very different from each other, and I enjoy the stability this framework provides. Big thanks to Taylor Otwell and the team.
]]>These are websites that have images based on the Latitude and Longitude co-ordinates, which is a good start
In turn this got me thinking about adding links to anything using the Lat/Long of the site.
And this is what the links look like on the details pages for any location:
So you can now visit the The Monkey Puzzle pub page and find images taken locally.
I wonder what else could be added with links to latitude and longitude, please reply in the comments below.
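As a sketch, such links can be generated straight from the stored coordinates; the two URL patterns below (OpenStreetMap and Google Maps) are well-known lat/long schemes, and other services could be added the same way:

```php
<?php
// Build external map links for a point of interest from its coordinates.
function geoLinks(float $lat, float $lon, int $zoom = 18): array
{
    return [
        // %F prints a locale-independent float (6 decimal places)
        'osm'    => sprintf('https://www.openstreetmap.org/#map=%d/%F/%F', $zoom, $lat, $lon),
        'google' => sprintf('https://www.google.com/maps?q=%F,%F', $lat, $lon),
    ];
}
```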
]]>So I added it, and you can see them at the base of this page.
]]>Planted a green manure on a 10m square plot. Never tried it before, but just threw the seed across a couple of empty beds and will see how it goes.
Moved a lot of compost onto the big compost heap and made space for the pond I want to make. Covered it all with six wheelbarrows of fresh horse manure. Looks nice and big again.
Big harvest of tomatoes, they never stop coming. Cucumbers too.
Finally, the peppers have started to appear. I started them at the same time as the tomatoes, but they never really got going.
Things to grow next year: pumpkin patch with corn and beans (3 companion plants)
Things I didn't grow well: peas
Things to plant: leeks, pak choi, spinach
Get grape cuttings from other allotment holders.
]]>Having spent most of the summer allotment gardening and watching private jets at Farnborough Airport, I feel refreshed and am now back to write more blogs.
Planning to write a monthly allotment blog and focus the blog more on the Allotment side of Andy. Also going to write pages for specific plants I like to grow.
Also going to add recipes for a number of the preserves I have made including chutney and tomato relish.
]]>To install I ran:
composer require spatie/browsershot
I also ensured that Puppeteer and chromium-browser were installed; on Debian I used:
export PUPPETEER_SKIP_DOWNLOAD='true'
npm install puppeteer --global
apt install -y chromium-browser
The code is fairly straightforward, I added this to my existing spider and it worked.
use Spatie\Browsershot\Browsershot;
try {
Browsershot::url($url)
->timeout(120)
->setNodeBinary('/usr/bin/node')
->setScreenshotType('jpeg', 100)
->save("/images/".$id.".jpg");
} catch (\Exception $e) {
info($e->getMessage());
}
One idea I had is to change the user agent to:
->userAgent('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.50 Safari/537.36')
I added it to a Guzzle-based spider, with a payload of 416 websites. This made 343 screenshots, while the Guzzle spider made only 329! A number of the failures are 404 (not found) errors that I will fix before running it again, but it was nice to get it working better than Guzzle.
Now there is the issue of the cookie popups and dealing with them....
]]>I want a way to get metrics like this from an API weekly to update the 'scores' for each website, so I am contacting people who may be able to help. If you can offer any advice, please get in touch.
]]>https://allotmentandy.github.io/blog/2022-04-12-Adding-share-buttons-for-twitter-facebook-and-linkedin/
into
https%3A%2F%2Fallotmentandy.github.io%2Fblog%2F2022-04-12-Adding-share-buttons-for-twitter-facebook-and-linkedin%2F
<div class='flex items-center justify-center gap-4'>
<div class="p-4">
<?php
$shareURL = urlencode(url()->current());
?>
Share this page on:
<a href="https://www.facebook.com/sharer/sharer.php?u=<?php echo $shareURL; ?>" target="_blank" class="bg-blue-100 rounded p-4 m-2 border-1 border-black rounded-full hover:bg-indigo-200 hover:text-black">
<img class="inline" width=25 src="/images/facebook.svg"> Facebook</a>
<a href="https://twitter.com/intent/tweet?text=my share text&url=<?php echo $shareURL; ?>" target="_blank" class="bg-blue-100 rounded p-4 m-2 border-1 border-black rounded-full hover:bg-indigo-200 hover:text-black">
<img class="inline" width=25 src="/images/twitter.svg"> Twitter</a>
<a href="http://www.linkedin.com/shareArticle?mini=true&url=<?php echo $shareURL; ?>" target="_blank" class="bg-blue-100 rounded p-4 m-2 border-1 border-black rounded-full hover:bg-indigo-200 hover:text-black" >
<img class="inline" width=25 src="/images/linkedin.svg"> LinkedIn </a>
</div>
</div>
This is what it looks like (adapted slightly for this blog)
I need to publish it to test it out now, fingers crossed.
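The encoding step shown at the top of this post is just PHP's urlencode(), which percent-encodes the ':' and '/' characters so the URL survives being passed as a query-string value:

```php
<?php
$url = 'https://allotmentandy.github.io/blog/2022-04-12-Adding-share-buttons-for-twitter-facebook-and-linkedin/';
// ':' becomes %3A and '/' becomes %2F; letters, digits, '-' and '.' pass through
$shareURL = urlencode($url);
echo $shareURL;
// https%3A%2F%2Fallotmentandy.github.io%2Fblog%2F2022-04-12-Adding-share-buttons-for-twitter-facebook-and-linkedin%2F
```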
]]>Share this page on:
<div id="social-links">
<a href="https://www.facebook.com/sharer/sharer.php?u=<?php echo url()->current();?>" id="">
<img class="inline" width=25 src="/images/facebook.svg\"></a>
<a href="https://twitter.com/intent/tweet?text=my share text&url=<?php echo url()->current();?>">
<img class="inline" width=25 src="/images/twitter.svg\"></a>
<a href="http://www.linkedin.com/shareArticle?mini=true&url=<?php echo url()->current();?>&title=my share text&summary=dit is de linkedin summary">
<img class="inline" width=25 src="/images/linkedin.svg"> </a>
</div>
Looking into these links, I discovered that it is best to have meta tags to help both Facebook and Twitter display cards; otherwise they have to guess what the links on the page are about.
Facebook docs
developers.facebook.com/docs/sharing/webmasters/
Twitter docs
developer.twitter.com/en/docs/twitter-for-websites/cards/guides/getting-started
OpenGraph Docs
I also wrote some code to extract the meta tags from a website spider I ran, to see the top 50 most common meta tags in use. I was surprised at the number of sites with the OpenGraph meta tags.
Metatag | Count |
---|---|
viewport | 47943 |
description | 40094 |
og:title | 27531 |
og:url | 25707 |
og:type | 24986 |
og:description | 23818 |
robots | 23078 |
og:site_name | 23035 |
twitter:card | 21264 |
generator | 17519 |
X-UA-Compatible | 17504 |
og:image | 16992 |
keywords | 15609 |
og:locale | 14901 |
Content-Type | 14035 |
twitter:title | 12400 |
google-site-verification | 11492 |
twitter:description | 10984 |
msapplication-TileImage | 10827 |
article:modified_time | 8901 |
theme-color | 8071 |
twitter:site | 8025 |
og:image:width | 6952 |
twitter:image | 6931 |
og:image:height | 6923 |
author | 5314 |
twitter:label1 | 5175 |
twitter:data1 | 5175 |
msapplication-TileColor | 5172 |
format-detection | 4711 |
content-type | 3302 |
twitter:creator | 3066 |
msvalidate.01 | 2993 |
article:publisher | 2944 |
twitter:url | 2475 |
facebook-domain-verification | 2340 |
og:image:secure_url | 2305 |
x-ua-compatible | 2103 |
og:image:type | 2090 |
apple-mobile-web-app-capable | 2022 |
msapplication-config | 1966 |
title | 1806 |
HandheldFriendly | 1799 |
fb:app_id | 1671 |
application-name | 1666 |
Description | 1651 |
p:domain_verify | 1480 |
MobileOptimized | 1472 |
copyright | 1413 |
revisit-after | 1360 |
apple-mobile-web-app-title | 1314 |
Keywords | 1297 |
So with this in mind, I decided to use the following new meta tags:
<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@londiniumcom" />
<meta name="twitter:creator" content="@londiniumcom" />
<meta name="twitter:title" content="title" />
<meta property="og:url" content="<?php echo url()->current();?>" />
<meta property="og:title" content="" />
<meta property="og:type" content="website" />
<meta property="og:description" content="Londinium.com - London maps, directory and information" />
<meta property="og:site_name" content="https://londinium.com" />
<meta property="description" content="Londinium.com - London maps, directory and information" />
I am going to implement this with a new laravel blade partial.
]]>But now it is Monday morning, and I like to start the week by refreshing the data with a new download from https://overpass-api.de/api/interpreter
I import this into the database and then run a website spider over the links to find more data for each entry and see what has changed.
This is the overpass api query i run each week which covers a large area around London:
data=[out:json];nwr[~"^(brand|website|twitter|facebook|contact:website|contact:twitter|contact:facebook)$"~"."]
(51.11386850819646,-1.197509765625,51.92394344554469,0.85418701171875);out center;
There is a huge fall in the number of entries: approx 55,000 this week compared to 62,000 last week. I wonder what is causing this difference? I will investigate in a future blog post.
One error I was having with the spider is when a value is too long for the database field, for example a title tag longer than the 255-character varchar field the database allows. One of the nice things in Laravel is the ability to use helpers: the Str::limit helper allows me to truncate the string to the correct length, thus preventing the error.
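One detail worth noting: Str::limit appends its $end string ('...' by default) after the cut, so the result can be up to three characters longer than the limit you pass; for a 255-character column you would limit to 252. A plain-PHP sketch of what the helper does (not Laravel's actual source, and using substr rather than the multibyte-safe functions Laravel uses):

```php
<?php
// Cut $value at $limit characters and append $end, leaving
// short strings untouched (mirrors Str::limit's behaviour).
function limitString(string $value, int $limit = 100, string $end = '...'): string
{
    if (strlen($value) <= $limit) {
        return $value;
    }
    return rtrim(substr($value, 0, $limit)) . $end;
}
```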
Another feature of the spider is to determine the HTTP response codes for the website. Here is a table of the most common results and their meanings.
HTTP Code | Count | Meaning |
---|---|---|
200 | 30592 | OK |
404 | 2167 | Not Found |
403 | 739 | Forbidden |
500 | 76 | Internal Server Error |
503 | 75 | Service Unavailable |
409 | 22 | Conflict |
400 | 20 | Bad Request |
410 | 14 | Gone |
And also a handful each of the following HTTP codes: 429, 502, 401, 402, 530, 423, 415, 406, 521, 301, 300, 526, 426.
Other errors which I am investigating occur when Guzzle returns
More information about these errors is in the Guzzle docs.
Once this process is finished, the website will be updated with the new database tables and I also plan to update the site with a few more features based on this new spider table.
Enjoy..
]]>So I did a little search online and found this PHP function that calculates the distance from this point for each entry in the database.
function distance($lat1, $lon1, $lat2, $lon2, $unit)
{
$theta = $lon1 - $lon2;
$dist = sin(deg2rad($lat1)) * sin(deg2rad($lat2)) + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * cos(deg2rad($theta));
$dist = acos($dist);
$dist = rad2deg($dist);
$miles = $dist * 60 * 1.1515;
$unit = strtoupper($unit);
if ($unit == "K") {
return ($miles * 1.609344);
} elseif ($unit == "N") {
return ($miles * 0.8684);
} else {
return $miles;
}
}
To get the distance from Charing Cross, it is simply a call to
$distanceFrom = distance(51.509, -0.122, $lat, $lon, 'M');
To store it in the database I added a 'float' field into the mysql table like so:
`distance` float NOT NULL,
And updated the Laravel controller to order the results by this field:
ORDER BY distance ASC
And hey presto, the lists of websites are now sorted by distance from Charing Cross!
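As a quick sanity check of the function (repeated here so the snippet runs standalone): one degree of longitude at London's latitude is roughly 69 km, and the spherical-law-of-cosines formula above should agree. One caveat worth knowing: for two identical points, floating-point rounding can push the acos() argument fractionally above 1 and produce NAN, so guard that case if it can occur in your data.

```php
<?php
// The distance function from the post, unchanged.
function distance($lat1, $lon1, $lat2, $lon2, $unit)
{
    $theta = $lon1 - $lon2;
    $dist = sin(deg2rad($lat1)) * sin(deg2rad($lat2)) + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * cos(deg2rad($theta));
    $dist = acos($dist);
    $dist = rad2deg($dist);
    $miles = $dist * 60 * 1.1515;
    $unit = strtoupper($unit);
    if ($unit == "K") {
        return ($miles * 1.609344);
    } elseif ($unit == "N") {
        return ($miles * 0.8684);
    } else {
        return $miles;
    }
}

// one degree of longitude due west of Charing Cross: roughly 69 km
$km = distance(51.509, -0.122, 51.509, -1.122, 'K');
```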
]]>Today I tackled the issue of setting the user location with a cookie, so that the feature and brand maps are more relevant to the user's location.
This was a job for JavaScript.
I created a new page setLocation with the following function to set the cookie at the point of click.
function onMapClick(e) {
//map click event object (e) has latlng property which is a location at which the click occured.
popup
.setLatLng(e.latlng)
.setContent("<h2>LOCATION COOKIE SET</h2> <strong>Lat " + e.latlng.lat + "</strong><br><strong>Long " +
e.latlng.lng +
"</strong>")
.openOn(mymap);
lat = "lat=" + e.latlng.lat + '; path = /; sameSite=Lax;';
long = "long=" + e.latlng.lng + '; path = /; sameSite=Lax;';
document.cookie = lat;
document.cookie = long;
}
And some new functionality to use the new cookie variables if set on the features and brand pages.
function getCookie(name) {
// Split cookie string and get all individual name=value pairs in an array
var cookieArr = document.cookie.split(";");
// Loop through the array elements
for (var i = 0; i < cookieArr.length; i++) {
var cookiePair = cookieArr[i].split("=");
/* Removing whitespace at the beginning of the cookie name
and compare it with the given string */
if (name == cookiePair[0].trim()) {
// Decode the cookie value and return
return decodeURIComponent(cookiePair[1]);
}
}
// Return null if not found
return null;
}
window.onload = function () {
var Lat = 51.5
var cookieLat = getCookie("lat");
if (cookieLat != null) {
Lat = getCookie("lat");
}
var Long = -0.144
var cookieLong = getCookie("long");
if (cookieLong != null) {
Long = getCookie("long");
}
map = L.map('map').setView([Lat, Long], 14);
I am sure it could be improved, but it works and allows both a default and a user-set value.
]]>One of the things it suggested was to create, was a sitemap.xml file for all the links in the site. Using Laravel, there are often a number of tools to complete any job, and I soon found and setup github.com/spatie/laravel-sitemap as a command line tool. However this is a tool that is going to crawl the entire website and with over 2000 routes, is a slow way to build this sitemap.
Google allows both .xml and .txt files according to the docs, see https://developers.google.com/search/docs/advanced/sitemaps/build-sitemap
So I changed tack and went back to using AWK. With this one script, I can extract all the routes from the routes/web.php file and prefix them with the domain name:
awk -F"'" '$1 ~ /^Route::/ {print"http://londinium.com"$2}' routes/web.php > sitemap.txt
The second part is to create a list of the pages from the database. I have a table 'points' where I import OSM data which has a website, so I created an artisan console command which outputs these pages with the command php artisan sitemap:generate > sitemapNWR.txt. It limits the select to 50,000 as that is the limit imposed by Google. The commented-out line offsets that query by 50,000.
<?php
namespace App\Console\Commands;
use Illuminate\Console\Command;
use App\Models\Points;
class GenerateSitemap extends Command
{
/**
* The console command name.
*
* @var string
*/
protected $signature = 'sitemap:generate';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Make a sitemap of all the db pages from the points table.';
/**
* Execute the console command.
*
* @return mixed
*/
public function handle()
{
$points = Points::limit(50000)->get();
//$points = Points::skip(50000)->take(50000)->get();
$points->each(function ($point) {
echo "http://londinium.com/". $point->nwr. "/" . $point->id;
echo PHP_EOL;
});
}
}
It is now two simple commands, and I can upload the results to Google Search Console.
Another job done!
]]>I wanted to put together a blog post with all the Overpass queries I have been using, in one place.
You can test these out using Overpass Turbo, where you can run each example. You may need to adapt the BBOX area for the search and edit them for your purpose.
Get a Feature eg. supermarket
[out:json][timeout:25];
(
nwr[shop=supermarket]({{bbox}});
);
// print results
out body;
>;
out skel qt;
Get everything in a Place
nwr[shop]; nwr[amenity]["name"]; nwr[tourism]["name"]; nwr[sport]["name"]; nwr[building]["name"]; nwr[leisure]["name"]; nwr[public_transport]["name"];nwr[office]["name"];
Get a Brand eg. waitrose
[out:json][timeout:25];
(
nwr["shop"]["name"="Waitrose"]({{bbox}});
nwr["shop"]["brand"="Waitrose"]({{bbox}});
nwr["shop"]["operator"="Waitrose"]({{bbox}});
);
// print results
out body;
>;
out skel qt;
get a single NWR Node / Way / Relation, eg Lords Cricket Ground
[out:json];
relation(9653407);
// print results
out body; >; out skel qt;
London Overground, output as a CSV
[out:csv(::type,::id,network,type,name)][timeout:25];
{{sel=["network"~"London Overground"]["type"~"route"]}}
(
relation{{sel}};
);
out meta qt;
GAY=yes with website
[bbox:{{bbox}}][out:xml][timeout:30];(
nwr[gay=yes][!lgbtq][website];
);out meta;>;out meta qt;
Nodes and Ways with Images
[out:json][timeout:25];
{{geocodeArea:London}}->.searchArea;
(
node["image"](area.searchArea);
way["image"](area.searchArea);
);
out body;
>;
out skel qt;
Communication towers
node
["tower:type"=communication]
["communication:mobile_phone"=yes]
({{bbox}});
out;
A Postcode like query
[out:json][timeout:25];
(
node["addr:postcode"~"^HA8"];
way["addr:postcode"~"^HA8"];
relation["addr:postcode"~"^HA8"];
);
out body;
>;
out skel qt;
AREA NAME
( area[name="Harrow"][admin_level=9]; )->.searchArea;
(
way[highway=cycleway](area.searchArea);
way[highway=path][bicycle=designated](area.searchArea);
way[highway][cycleway]["cycleway"!~"no|opposite"](area.searchArea);
way[highway]["cycleway:left"!=no]["cycleway:left"](area.searchArea);
way[highway]["cycleway:right"!=no]["cycleway:right"](area.searchArea);
);
out geom;
make stats length=sum(length()),section_lengths=set(length());
out;
]]>The ufw firewall is now enabled in a new role; I added each port Apache uses and also allowed SSH access.
Composer was complaining that I didn't have the php-curl module, so I installed that too.
Laravel Dusk wasn't working, so I added chromium and had to rerun php artisan dusk:install to set up the chromedriver for the tests.
However, PHP wasn't working perfectly: I needed to enable the proxy_fcgi module with a2enmod for it to work with .php files.
VS Code didn't like the fact that php-codesniffer wasn't installed, so I added it.
I also added a few apps that I like to use, including deluge, krita and fzf. fzf is a fuzzy finder for the command line, and I added an alias lf to perform a fuzzy find on the directory listing, with the following code added to the end of .bashrc:
#bashrc - add lf for fzf andy alias
lf () { ls -lah | fzf -e -q "$@" ;}
The biggest problem I had was trying to get Ansible to set up mysql/mariadb. The root user needs to have its authentication plugin changed to mysql_native_password. I did that by hand, as I haven't got it working with Ansible yet!
You can see the repo at github.com/allotmentandy/ansible-debian-install-desktop
]]>npm run production
working on my new machine. The problem was a combination of things:
Solutions I tried:
npm install laravel-mix-purgecss@6.0.0
npm install --save-dev sass-loader@7.1.0
npm install node-sass@v4
npm install --save-dev cross-env
npm install postcss-loader@~3.0.0 --save-dev
npm install -D tailwindcss@npm:@tailwindcss/postcss7-compat @tailwindcss/postcss7-compat postcss@\^7 autoprefixer@\^9 --force
Downgraded Node to version 12, then deleted the entire node_modules directory and reinstalled with the original package.json file.
Then fixed more errors until it worked.
This is now my package.json file
{
"private": true,
"scripts": {
"dev": "npm run development",
"development": "cross-env NODE_ENV=development node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js",
"watch": "npm run development -- --watch",
"watch-poll": "npm run watch -- --watch-poll",
"hot": "cross-env NODE_ENV=development node_modules/webpack-dev-server/bin/webpack-dev-server.js --inline --hot --disable-host-check --config=node_modules/laravel-mix/setup/webpack.config.js",
"prod": "npm run production",
"production": "cross-env NODE_ENV=production node_modules/webpack/bin/webpack.js --no-progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js"
},
"devDependencies": {
"@tailwindcss/typography": "^0.4.0",
"axios": "^0.19",
"cross-env": "^7.0.3",
"laravel-mix": "^5.0.1",
"laravel-mix-purgecss": "^5.0.0",
"lodash": "^4.17.13",
"postcss-loader": "~3.0.0",
"resolve-url-loader": "^3.1.0",
"sass": "^1.48.0",
"sass-loader": "^8.0.0",
"vue-template-compiler": "^2.6.14"
},
"dependencies": {
"@tailwindcss/forms": "^0.3.4",
"autoprefixer": "^9.8.8",
"boundingbox": "^0.1.2",
"escape-html": "^1.0.3",
"leaflet": "^1.7.1",
"overpass-frontend": "^2.7.0",
"overpass-layer": "^3.1.0",
"postcss": "^7.0.39",
"tailwindcss": "npm:@tailwindcss/postcss7-compat@^2.2.17",
"yaml": "^1.9.2"
}
}
Once I got this all working, my Ansible script then reinstalled v14 of Node.js :) so I am now updating the Ansible scripts to install v12.
Most of a day of experimenting with the behemoth that is the world of Node.js :)
I ran into this same problem again with another site. I added webpack 4 to the dependencies
"webpack": "^4.00.0"
and did a hard reinstall of all the node modules!
rm -rf node_modules
rm package-lock.json
npm cache clean --force
npm install
]]>Laravel has a browser testing environment called Dusk.
It uses the Chrome browser to test a website as a human would use it.
It can also make screenshots at different sizes to mirror different screen sizes.
Having moved to a new machine and copied over the code base, the Dusk tests were failing. To fix it I needed to run:
apt-get install chromium
php artisan dusk:install
I have added these to the ansible setup which I will blog about in the future.
Whilst writing the tests, I wanted a way to run a single test rather than the entire suite. This calls just the one test:
php artisan dusk tests/Browser/blog404Test.php
What I wanted to test for was that a page isn't found and returns a 404 error page. Dusk can't do that directly, so the Dusk test looks for '404' to appear in the page. But Dusk tests can also run vanilla PHPUnit assertions, which can assert the status is 404.
$this->browse(function (Browser $browser) {
$browser->visit('https://allotmentandy.github.io/missingPage/')
->assertSee('404');
$response = $this->get('https://allotmentandy.github.io/missingPage/')
->assertStatus(404);
});
I also wanted to test that there were no errors in the console:
$consoleLog = ($browser->driver->manage()->getLog('browser'));
$this->assertEqualsCanonicalizing($consoleLog, []);
This creates an array of the console output and asserts that it is a blank array. Simples!
]]>Here is the link to the repo
github.com/allotmentandy/ansible-debian-install-desktop
Inspired by the following repos on GitHub, I wanted to install as much as possible with Ansible, including settings for software and the XFCE panels, and to copy over the files I want as a backup.
https://github.com/liquuid/ansible-desktop-debian
https://github.com/geerlingguy/mac-dev-playbook
https://github.com/alecigne/ansible-desktop
I have installed a lot of software, but the core apps I wanted are as follows:
A number of extra software repositories were required; these feature in the roles within the repository.
To run the ansible playbook run this command and enter the ssh password and the root password like so:
ansible-playbook -i hosts workstation.yml --ask-become-pass --ask-pass
SSH password:
BECOME password[defaults to SSH password]:
One of the issues was that I tend to use su instead of the default sudo, and this requires become_method: su, which is set in the ansible.cfg file.
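For reference, the setting is a two-line fragment in ansible.cfg. The section name below is the standard one in current Ansible, so check your version's docs if your config is laid out differently:

```ini
# ansible.cfg — escalate with su instead of the default sudo
[privilege_escalation]
become_method = su
```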
I will extend this as I need to, and update the blog and the repo. Hope it is of some help. If you have any feedback, issues or suggestions, please contact me.
]]>Using https://github.com/plepe/overpass-frontend, I used the by-ID function to display a single Node, Way or Relation, using the format n123, w123 and r123 respectively to make the Overpass API query.
This is the code in the github readme:
const OverpassFrontend = require('overpass-frontend')
// you may specify an OSM file as url, e.g. 'test/data.osm.bz2'
const overpassFrontend = new OverpassFrontend('//overpass-api.de/api/interpreter')
// request restaurants in the specified bounding box
overpassFrontend.get(
['n27365030', 'w5013364'],
{
properties: OverpassFrontend.TAGS
},
function (err, result) {
if (result) {
console.log('* ' + result.tags.name + ' (' + result.id + ')')
} else {
console.log('* empty result')
}
},
function (err) {
if (err) { console.log(err) }
}
)
Here is an example of one of each of the 3 types:
Each page displays a smaller map with a list of the data stored, which is returned from the overpass API call.
It took me a bit of fiddling to get the data to display but it works!
One thing I also managed to solve was the ability to add a loading data... block to the header, which I have implemented on every map page to show the user that the data is loading. It isn't as quick as I would like, but at least it provides the user with feedback.
These are pages that have 10 entries or more of the following tags
One issue I ran into straight away was brands with apostrophes in them, e.g. Nando's, McDonald's, Sainsbury's, Papa John's and more.
Using this guide to Overpass API regex, https://wiki.openstreetmap.org/wiki/Overpass_API/Overpass_QL
My solution for now is to only match the start of the term, so I use Sainsbury, Nando etc. with the following regex Overpass API query:
(nwr["name"~"^{{$brand}}"]; nwr["brand"~"^{{$brand}}"]; nwr["operator"~"^{{$brand}}"]; );
This method also allows the query to match Tesco, Tesco Metro and Tesco Express, all from the Tesco query.
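As a sketch of how this query string could be assembled safely in PHP (brandQuery() is a hypothetical helper, not the site's actual code), preg_quote() escapes regex metacharacters so names like Mail Boxes Etc. cannot break the pattern, while apostrophes pass through untouched:

```php
<?php
// Sketch: build the prefix-anchored Overpass query for a brand name.
// preg_quote() escapes regex specials (., +, ?, etc.); apostrophes are
// not special, so "Nando's" works as-is inside the double-quoted QL string.
function brandQuery(string $brand): string
{
    $safe = preg_quote($brand, '/');
    return '(nwr["name"~"^' . $safe . '"]; '
         . 'nwr["brand"~"^' . $safe . '"]; '
         . 'nwr["operator"~"^' . $safe . '"]; );';
}
```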
To investigate all the brands, names and operators, I once again created a database import script to create a MySQL table for brands, and extracted the fields from the OSM JSON.
To find the most common brands I used this sql query:
SELECT LOWER(brand), brand, COUNT(brand) AS cnt
FROM brands
GROUP BY brand
HAVING cnt > 9
ORDER BY cnt DESC
This provided the following with at least 10 entries for my London area
Asda 210
Domino's 184
Boots 144
Lloyds Bank 138
Tesco Express 95
Wetherspoon 90
Pret A Manger 65
Premier Inn 54
Costa 53
Sainsbury's Local 50
Harvester 49
Sainsbury's 47
Starbucks 46
Hamptons International 43
Co-op Food 42
HSBC UK 42
Primark 41
Subway 39
Argos 38
McDonald's 36
Caffè Nero 34
Nando's 33
PizzaExpress 31
Bupa 29
Waitrose 28
Tesco 27
Holland & Barrett 25
Morrisons 24
Bairstow Eves 24
M&S Simply Food 22
Tesco Extra 22
Toby Carvery 22
Mail Boxes Etc. 22
Iceland 21
Marks & Spencer 21
KFC 20
Post Office 19
Esso 19
Boots Opticians 19
WHSmith 18
Toni & Guy 18
Nationwide 18
CTD Tiles 17
CeX 17
Better 17
Oxfam 17
Peacocks 17
Specsavers 17
Travelodge 17
Greggs 16
itsu 16
NatWest 16
Cotswold Outdoor 16
Zizzi 16
Papa John's 15
ALDI 15
Cancer Research UK 15
Screwfix 15
Hall & Woodhouse 15
Pizza Hut 15
Hilton 14
Wagamama 14
Spar 14
Superdrug 14
PureGym 14
Pets at Home 14
Furniture Village 14
Honest Burgers 14
Snappy Snaps 13
Five Guys 13
Krispy Kreme 13
Lidl 12
Ryman 12
Decathlon 12
3 Store 12
Nisa Local 12
Joe & The Juice 12
Waterstones 12
Côte Brasserie 12
Burger King 11
Office 11
Santander 11
Little Waitrose 11
Wickes 11
Kwik Fit 11
John Lewis 11
EE 11
Paul 11
Prezzo 10
TK Maxx 10
Bill's 10
Lush 10
Coral 10
Shell 10
The Gym 10
Sports Direct 10
Wilko 10
New Look 10
Game 10
Costco 10
William Hill 10
Knight Frank 10
Next 10
Farmfoods 10
Halifax 10
These are all linked from the frontpage on londinium.com
Many of these chains don't have the fields for every branch, which can be fixed by adding the correct brand in the OpenStreetMap system. I can do this for you: get in touch if you would like help improving the data stored in this open source map system.
]]>php artisan dusk:install
to run the 1st test
php artisan dusk
First I am going to create a test for the frontpage to load and then resize to the 4 main sizes which tailwind css uses tailwindcss.com/docs/responsive-design
This is the comprehensive manual for Laravel Dusk: laravel.com/docs/8.x/dusk
$browser->visit('http://www.londinium.com/')
->waitForText('Londinium')
->assertSee('Londinium');
$browser->resize(1920, 1080);
$browser->screenshot('home-1920');
$browser->resize(1280, 720);
$browser->screenshot('home-1280');
$browser->resize(768, 720);
$browser->screenshot('home-768');
$browser->resize(320, 720);
$browser->screenshot('home-320');
]]>I am just about to upload the latest version of my londinium.com website
Adding more postcodes to the frontpage map to fill the screen
Adding a list of nearest websites to the places pages using a database extracted from the Overpass API.
Combining all the javascript using webpack and compressing it. Also updating it to the latest versions.
Improving the map popup css using a larger font.
Tweaking the favicon.ico
Minor CSS improvements adding alternate odd even rows to the table of nearest links.
Using a mirror for the Overpass API data and adding a CORS header to allow it.
Fixing a few duplicate routes
Note: I used git log to see which files had changed in the last week:
git log --pretty=format: --name-only --since="7 days ago"
And I deployed on a Friday!
]]>I want 5 tabs with the following
I am using xdotool and have to press Enter
at the end of each type command. I also place a space before each command so it doesn't get stored in the command history.
The root one at the end doesn't log in as root; rather, it leaves the command ready to be run.
I like to have the terminal with tab titles, and it is a pain to do by hand. I am also embedding the gist from GitHub, which is a better place to keep the shell scripts.
Hope you enjoy it.
Andy
]]>I am about to upload a new feature for the area pages, where the page shows a table of results with websites nearest the centre point of the map. Perhaps this could be created dynamically from the live overpass data.
This map uses OpenLayers, but displays the different sort of features with different colours.
Create a page to set the location. Here are a few links to examples of geolocation using html5
https://www.w3schools.com/html/tryit.asp?filename=tryhtml5_geolocation
https://www.tutorialspoint.com/How-to-use-HTML5-Geolocation-Latitude-Longitude-API
This is the Overpass API Query to get all Waitroses in the BBOX area.
[out:json][timeout:25];
(
// query part for: “shop=* and name=Waitrose”
nwr["shop"]["name"="Waitrose"]({{bbox}});
// query part for: “shop=* and brand=Waitrose”
nwr["shop"]["brand"="Waitrose"]({{bbox}});
// query part for: “shop=* and operator=Waitrose”
nwr["shop"]["operator"="Waitrose"]({{bbox}});
);
// print results
out body;
>;
out skel qt;
This is just a vague idea, but at present a url field will be linked; however, there are a large number of urls without the http/https prefix, which makes them internal links.
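A minimal sketch of that fix-up, assuming a hypothetical normalizeUrl() helper rather than anything in the site's codebase:

```php
<?php
// Sketch: prepend a scheme when the stored url field has none, so the
// link is no longer treated as internal. Assumes https is a reasonable
// default; a smarter version could probe http as a fallback.
function normalizeUrl(string $url): string
{
    $url = trim($url);
    if ($url === '' || preg_match('#^https?://#i', $url)) {
        return $url;          // already absolute (or empty)
    }
    return 'https://' . $url; // add the missing scheme
}
```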
]]>I am looking for help adding a loading message, as the Overpass API data takes a good few seconds to load. Basically, I am looking to add a box to the map that says loading data, referring to the loading of the results from the /api/interpreter call, which you can see in the browser dev tools.
eg
Loading data ......
26kb received
OR
Error - 426 - Server too busy - please try again
I am not sure whether, at the start of the page load, there is any indication of how much data will come in. The data seems to arrive in 'waves', but it also errors if there are too many requests. I want to be able to display this as a layer in the map, and for it to reappear when the map is moved.
One of the replies pointed me to this list of mirrors of the data https://wiki.openstreetmap.org/wiki/Overpass_API#Public_Overpass_API_instances
Trying out the API from https://kumi.systems/ I got a CORS error, which complains about a Cross-Origin Request, so I added this PHP at the start to fix that.
<?php
header('Access-Control-Allow-Origin: https://overpass.kumi.systems/');
?>
I have been in contact with the team at Kumi Systems and hope to find a solution to the loading issue, and to see if their API mirror is any faster.
Another reply suggested hosting your own Overpass API, which is installed like so: https://wiki.openstreetmap.org/wiki/Overpass_API/Installation I may look at this solution in the future.
]]>I replaced out with out center, so my query for extracting websites is now put in a query file called query.osm, like so:
data=[out:json];nwr[~"^(website|twitter|facebook|contact:website|contact:twitter|contact:facebook)$"~"."] (51.11386850819646,-1.197509765625,51.92394344554469,0.85418701171875);out center;
Note that the export area is encoded in the query, and the output format is json.
To download the osm data with this command using the 'query.osm' file above run:
wget -O osm.json --post-file=query.osm "https://overpass-api.de/api/interpreter"
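The same POST can also be made from PHP with a stream context, if you would rather keep the download inside the import pipeline. This is a sketch: buildOverpassRequest() is a name I chose, and the endpoint is the one used above.

```php
<?php
// Sketch: the wget --post-file call, done from PHP with a stream context.
// The commented-out lines perform the actual download (network required).
function buildOverpassRequest(string $queryFile): array
{
    return ['http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => file_get_contents($queryFile), // the query.osm body
    ]];
}
// $ctx  = stream_context_create(buildOverpassRequest('query.osm'));
// $json = file_get_contents('https://overpass-api.de/api/interpreter', false, $ctx);
// file_put_contents('osm.json', $json);
```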
To parse the osm.json
file to extract data and create an insert SQL file, I amended my parseXml.php file like so.
I am now downloading JSON, so I needed to change the parse file a lot. It is much cleaner and easier to extract the data from JSON:
<?php
error_reporting(E_ALL ^ E_WARNING); // disable warnings as this is an sql import
$json = file_get_contents('osm.json');
$phpDataArray = json_decode($json, false );
foreach($phpDataArray as $value){
foreach($value as $row){
if ($row->tags->website){ $website = $row->tags->website;}
else $website = $row->tags->{'contact:website'};
// if the website is not valid or blank - continue, ignoring this row
if (filter_var($website, FILTER_VALIDATE_URL) === false) {
continue;
}
if ($row->id){
$nwr = $row->type; // Node, Way, Relation
if ($row->lat){
$lat = $row->lat;
}
else
$lat = $row->center->lat;
if ($row->lon){
$lon = $row->lon;
}
else
$lon = $row->center->lon;
$address = $row->tags->{'addr:housenumber'} . " " . $row->tags->{'addr:housename'} . " " .
$row->tags->{'addr:unit'} . " " . $row->tags->{'addr:street'} . " " .
$row->tags->{'addr:place'} . " " . $row->tags->{'addr:city'} . " " .
$row->tags->{'addr:postcode'};
$feature = "";
if($row->tags->railway){ $feature = $row->tags->railway;}
if($row->tags->public_transport){ $feature = $row->tags->public_transport;}
if($row->tags->highway){ $feature = $row->tags->highway;}
if($row->tags->building && ($row->tags->building != "yes") ){ $feature = $row->tags->building;}
if($row->tags->natural){ $feature = $row->tags->natural;}
if($row->tags->sport){ $feature = $row->tags->sport;}
if($row->tags->leisure){ $feature = $row->tags->leisure;}
if($row->tags->landuse){ $feature = $row->tags->landuse;}
if($row->tags->craft){ $feature = $row->tags->craft;}
if($row->tags->office){ $feature = $row->tags->office;}
if($row->tags->tourism){ $feature = $row->tags->tourism;}
if($row->tags->shop){ $feature = $row->tags->shop;}
if($row->tags->amenity){ $feature = $row->tags->amenity;}
$amenity = "";
if($row->tags->amenity){ $amenity = $row->tags->amenity;}
$building = "";
if($row->tags->building && ($row->tags->building != "yes") ){ $building = $row->tags->building;}
$tourism = "";
if($row->tags->tourism){ $tourism = $row->tags->tourism;}
$craft = "";
if($row->tags->craft){ $craft = $row->tags->craft;}
//social
if ($row->tags->twitter){ $twitter = $row->tags->twitter;}
else $twitter = $row->tags->{'contact:twitter'};
if ($row->tags->facebook){ $facebook = $row->tags->facebook;}
else $facebook = $row->tags->{'contact:facebook'};
if ($row->tags->instagram){ $instagram = $row->tags->instagram;}
else $instagram = $row->tags->{'contact:instagram'};
if ($row->tags->linkedin){ $linkedin = $row->tags->linkedin;}
else $linkedin = $row->tags->{'contact:linkedin'};
// make the sql using INSERT IGNORE to prevent errors
echo "INSERT IGNORE INTO `points` (`id`, `website`, `address`, `feature` ,`amenity` ,`building` ,
`tourism` ,`craft` , `twitter`, `facebook`, `linkedin`, `instagram`, `name`, `nwr`, `point`) ";
echo "VALUES(" . $row->id .", '" . addslashes($website) . "', '". addslashes(trim($address)) . "', '".
addslashes($feature) . "', '" .addslashes($amenity) . "', '" .addslashes($building) . "', '" .
addslashes($tourism) . "', '" .addslashes($craft) . "', '" . $twitter . "', '". $facebook
. "', '". $linkedin . "', '". $instagram . "', '" . addslashes($row->tags->name). "', '".
$nwr . "', POINT(" . $lat . " , " . $lon ." ));";
echo PHP_EOL;
}
}
}
To create the sql file, I ran it like this
php parseJson.php > points.sql
I had to extract the nwr (Node/Way/Relation) to be able to link to the live database on openstreetmap. I also renamed the field type as feature, so the new database table is like so:
CREATE TABLE `points` (
`id` varchar(11) NOT NULL,
`nwr` varchar(255) NOT NULL,
`feature` varchar(255) NOT NULL,
`amenity` varchar(255) NOT NULL,
`building` varchar(255) NOT NULL,
`tourism` varchar(255) NOT NULL,
`craft` varchar(255) NOT NULL,
`name` varchar(255) NOT NULL,
`address` varchar(255) NOT NULL,
`website` varchar(255) NOT NULL,
`twitter` varchar(255) NOT NULL,
`facebook` varchar(255) NOT NULL,
`linkedin` varchar(255) NOT NULL,
`instagram` varchar(255) NOT NULL,
`point` point NOT NULL,
PRIMARY KEY (`id`),
SPATIAL KEY `point` (`point`)
) ENGINE=InnoDB;
Import into the db using pipe viewer (pv) to see progress:
pv points.sql | mysql -u root -p londinium
And hey presto, a table full of data with each entity having geographic data.
There are still a number of 'dodgy' entries, which I plan to clean up at the next stage of this import.
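One easy cleanup, sketched here under the assumption that some of the 'dodgy' entries are addresses with runs of blanks left behind by missing addr:* parts (cleanAddress() is a hypothetical helper, not code from the import script):

```php
<?php
// The address built above concatenates seven addr:* parts with spaces,
// so any missing part leaves double (or worse) spacing. Collapse runs
// of whitespace to one space and trim the ends.
function cleanAddress(string $address): string
{
    return trim(preg_replace('/\s+/', ' ', $address));
}
```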
]]>Here are a few links of use.
hashicorp terraform/associate-study
bogotobogo.com Terraform_commands_cheat_sheet
Good tutorials here - one to do is upload a static site to s3 bogotobogo.com
I also enjoyed this 13 hour video by Andrew Brown who goes through most of the aspects of the certification test.
Here are some practice questions
bb-tutorials-and-thoughts/250-practice-questions-for-terraform-associate-certification-7a3ccebe6a1a
]]>What are you trying to achieve? Do you want to tell people about your script? Are you looking for co-developers? Are you looking for ideas?
Over 20 years ago, I registered the domain name Londinium.com and spent many years building a large directory website focused on London. Around 2015, Google kicked directories like mine out of their search engine results; with that, alongside regular Denial of Service attacks, the EU cookie directive and other factors, I decided to take the website down as it was no longer making any money.
Now I have relaunched londinium.com with the maps from OpenStreetMap and, using the Overpass API, I hope to provide a map-based solution for people looking for local information and more.
One idea I have is to contribute all the data to OpenStreetMap. I have developed a number of web spiders, crawlers and tools to find website information. This includes checking websites work, extracting links from websites, content from web pages and screenshots of websites.
I would like to make use of these tools to clean the data returned from the Overpass API (e.g. fixing links to websites that are missing the http:// part). I would also like to use the tools I have written to add more links to social media like Twitter and Facebook, and also to YouTube, Flickr and more. My tools also extract data from the head of the websites, like ICBM location and other meta tag data.
As a PHP developer with a geography background, I mainly work with the Laravel framework. Much of the OSM technology is JavaScript based, and although I have coded londinium.com so far, JavaScript is not my strongest skill. I would like to find other people that use similar technologies (Leaflet JS, Overpass API) willing to share and help each other.
I would also like to open source the technology once it has reached a stable state. Not only the frontend website, but the backend tools that find, check and update information held.
One thing I would like to add is a "Loading..." modal to say that the Overpass API data is loading on the live site. Not entirely sure how to do this.
Another question, with regards to developers: where do they hang out? Is the forum the correct place to paste the link to this post, or are the mailing lists better? Is there a chatroom or IRC channel in which OpenStreetMap folk dwell?
I may well update this blog post if people ask questions or other points occur to me.
Happy Christmas
Andy
]]>$housenumber . " " . $street . " " . $city . " " . $postcode;
and also a type, which is a combination of sport, leisure, shop, amenity, office
which gives an idea of what the object is.
I have added fields into the database table to store these like so:
CREATE TABLE `points` (
`id` varchar(11) NOT NULL,
`website` varchar(255) NOT NULL,
`address` varchar(255) NOT NULL,
`type` varchar(255) NOT NULL,
`twitter` varchar(255) NOT NULL,
`name` varchar(255) NOT NULL,
`point` point NOT NULL,
PRIMARY KEY (`id`),
SPATIAL KEY `point` (`point`)
) ENGINE=InnoDB;
so now the parseXml.php file looks like this
parseXml.php
<?php
$xmlDataString = file_get_contents('osmData.xml');
$xmlObject = simplexml_load_string($xmlDataString);
$json = json_encode($xmlObject);
$phpDataArray = json_decode($json, false );
foreach($phpDataArray as $value){
foreach($value as $row){
$name = "";
$twitter = "";
$housenumber = "";
$street = "";
$city = "";
$postcode = "";
$type = "";
foreach($row->{'tag'} as $attributes){
if($attributes->{'@attributes'}->{'k'} == 'name'){
$name = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'contact:twitter'){
$twitter = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'addr:housenumber'){
$housenumber = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'addr:street'){
$street = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'addr:city'){
$city = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'addr:postcode'){
$postcode = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'sport'){
$type = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'leisure'){
$type = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'shop'){
$type = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'amenity'){
$type = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'office'){
$type = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'website'){
$address = $housenumber . " " . $street . " " . $city . " " . $postcode;
$address = trim($address);
// make the sql using INSERT IGNORE to prevent errors
echo "INSERT IGNORE INTO `points` (`id`, `website`, `address`, `type` , `twitter`, `name`, `point`) ";
echo "VALUES(" . $row->{'@attributes'}->id .", '" . addslashes($attributes->{'@attributes'}->{'v'}) . "', '". addslashes($address) . "', '". addslashes($type) . "', '". $twitter . "', '" . addslashes($name) . "', POINT(" . $row->{'@attributes'}->lat . " , " . $row->{'@attributes'}->lon ." ));";
echo PHP_EOL;
}
}
}
}
Running it using the same two commands: the first processes the XML file and outputs SQL inserts, the second inserts them into the db.
php parseXml.php > points1.sql
mysql -u root -p dbtable < points1.sql
There are a few other fields I was looking at, including facebook, contact:facebook, contact:instagram, contact:linkedin, linkedin, addr:country, email, phone and contact:phone,
but these don't actually hold much data. So I am going to focus on displaying the results on the location pages to list the nearby entries.
NB: The type field in the code above only takes the value of the last field that exists, so office is more important than amenity etc. I did notice that some have multiple values separated by semi-colons (e.g. spinning;fitness;yoga); I will check to see if these work in the Overpass API results.
I am also planning to run a web spider/crawler on these results to see which pages are working (non-404 results), and also to extract more of the external social media links using my Social Media Link Extractor package, but that is for a future post. I would be interested in adding this data back to OpenStreetMap automatically, but I am not sure that is allowed.
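As a sketch of the 404 check (statusCode() is a hypothetical helper, not the Social Media Link Extractor code), PHP's get_headers() returns the raw status line first, and parsing the code out of it is enough to flag dead links:

```php
<?php
// Sketch: extract the numeric status code from an HTTP status line,
// e.g. "HTTP/1.1 404 Not Found" -> 404. Returns 0 for unparseable input.
function statusCode(string $statusLine): int
{
    return preg_match('#^HTTP/\S+\s+(\d{3})#', $statusLine, $m)
        ? (int) $m[1]
        : 0;
}
// Live usage (network required):
// $headers = @get_headers('https://example.com/');
// $dead = ($headers === false) || statusCode($headers[0]) >= 400;
```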
]]>Going to use the new(ish) POINT datatype in MySQL.
CREATE TABLE `points` (
`id` varchar(11) NOT NULL,
`website` varchar(255) NOT NULL,
`twitter` varchar(255) NOT NULL,
`name` varchar(255) NOT NULL,
`point` point NOT NULL,
PRIMARY KEY (`id`),
SPATIAL KEY `point` (`point`)
) ENGINE=InnoDB;
I amended the PHP script in yesterday's post to output SQL.
<?php
$xmlDataString = file_get_contents('osmData.xml');
$xmlObject = simplexml_load_string($xmlDataString);
$json = json_encode($xmlObject);
$phpDataArray = json_decode($json, false );
foreach($phpDataArray as $value){
foreach($value as $row){
$name = "";
$twitter = "";
foreach($row->{'tag'} as $attributes){
if($attributes->{'@attributes'}->{'k'} == 'name'){
$name = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'contact:twitter'){
$twitter = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'website'){
echo "INSERT INTO `points` (`id`, `website`, `twitter`, `name`, `point`) ";
echo "VALUES(" . $row->{'@attributes'}->id .", '" . $attributes->{'@attributes'}->{'v'} . "', '". $twitter . "', '" . addslashes($name) . "', POINT(" . $row->{'@attributes'}->lat . " , " . $row->{'@attributes'}->lon ." ));";
echo PHP_EOL;
}
}
}
}
To save the SQL I ran:
php xmlParser.php > points.sql
To import it to the database I run:
mysql -u root -p dbName < points.sql
There were a couple of problems due to two URLs having ' in them! Neither actually worked (404 errors), so I manually deleted them from the import file, deleted the table contents and started again. Third time lucky, I had a new db table with over 10,000 records.
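For what it's worth, the manual deletion could have been avoided by escaping values before they go into the INSERT, as a later version of the script does with addslashes(). A quick demonstration (the URL is made up):

```php
<?php
// An apostrophe in a value terminates the SQL string literal early;
// addslashes() escapes it so the INSERT stays valid.
$url = "https://example.com/o'neills";  // hypothetical problem value
$sql = "INSERT INTO `points` (`website`) VALUES('" . addslashes($url) . "');";
// The apostrophe is now \' and no longer breaks the statement.
```

addslashes() is good enough for a one-off local import like this; for anything user-facing, prepared statements or mysqli_real_escape_string() are the safer choice.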
Now i can run a query like this:
SELECT * FROM points WHERE ST_Distance_Sphere(point, POINT(51.49591970845512, -0.26298522949218756)) <= 2000 ;
which returns 100 websites within 2,000 metres of the point (in Chiswick)! This works for all the points I tried, including much shorter coordinates.
SELECT * FROM points WHERE ST_Distance_Sphere(point, POINT(51.49, -0.14)) <= 2000 ;
Now I wanted to improve it further, showing and ordering by the distance from the centre point:
SELECT id, name, website,
ST_X(point) AS longitude,
ST_Y(point) AS latitude,
ST_DISTANCE_SPHERE(point, POINT(51.49455119685909, -0.14415264129638675)) AS dist
FROM points
HAVING dist < 500
ORDER BY dist;
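To sanity-check ST_Distance_Sphere results outside the database, the same great-circle distance can be computed with the haversine formula. This is a sketch (sphereDistance() is my name for it); note that MySQL's default sphere radius is 6,370,986 m, slightly different from the 6,371,000 m used here, so expect small differences:

```php
<?php
// Sketch: haversine great-circle distance in metres on a spherical earth,
// mirroring what ST_Distance_Sphere computes in the queries above.
function sphereDistance(float $lat1, float $lon1,
                        float $lat2, float $lon2,
                        float $radius = 6371000.0): float
{
    $dLat = deg2rad($lat2 - $lat1);
    $dLon = deg2rad($lon2 - $lon1);
    $a = sin($dLat / 2) ** 2
       + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * sin($dLon / 2) ** 2;
    return 2 * $radius * asin(sqrt($a));
}
```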
Now I am going to improve the import parser to add the addresses, to make it easier to see problems. I am also changing the INSERT command to INSERT IGNORE to ignore duplicate-key errors.
My plan is to import this data into a database, so I wrote a PHP script to convert the data from XML to JSON and then output it in the terminal.
<?php
$xmlDataString = file_get_contents('osmData.xml');
$xmlObject = simplexml_load_string($xmlDataString);
$json = json_encode($xmlObject);
$phpDataArray = json_decode($json, false );
$counter =0;
foreach($phpDataArray as $value){
foreach($value as $row){
$name = "";
$twitter = "";
foreach($row->{'tag'} as $attributes){
if($attributes->{'@attributes'}->{'k'} == 'name'){
$name = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'contact:twitter'){
$twitter = ($attributes->{'@attributes'}->{'v'});
}
if($attributes->{'@attributes'}->{'k'} == 'website'){
echo ($row->{'@attributes'}->id );
echo " ";
echo ($row->{'@attributes'}->lat );
echo " ";
echo ($row->{'@attributes'}->lon );
echo " ";
echo ("\e[1;32;40m" . $attributes->{'@attributes'}->{'v'} ."\e[0m");
echo " ";
echo "\e[1;33;40m" . $name ."\e[0m";
echo " ";
echo "\e[0;34;40m" . $twitter ."\e[0m";
echo PHP_EOL;
$counter++;
}
}
}
}
echo "total records with website: " . $counter . PHP_EOL;
Here is a sample of the output which shows entries that have a website url.
The green values are the website values, the yellow ones the names and the blue ones the twitter fields. Some of the websites aren't websites, and many don't have the http/https prefix. Some of the entries don't have names. The twitter links aren't in any standard format: full URL, @twitterhandle or just the twitter handle.
I will develop this script further to extract more info from the xml and work out how and what to store in the database. Stay tuned.
]]>The frontpage of londinium.com now has a number of different styles of maps, with links to airports and main stations alongside postcodes. You can access these in the top right corner menu. I have added all the Mapbox ones, but there are a lot more.
This is on top of showing the OpenStreetMap tags / map features (e.g. supermarkets).
The features are extracted using the Overpass API, which is a frontend for the data held in the OSM system.
The site is built using Laravel, which is the PHP framework I have used most over the past few years.
The maps use Leaflet JS. You can see the range of base maps available on http://leaflet-extras.github.io/leaflet-providers/preview/index.html
Here are a number of links i found useful in developing the website.
github.com/plepe/overpass-layer
A simple change of line 67 of the setup.tf file from 50gb to 29gb, and then, by running 2 commands, an instance is quickly back up with a smaller disk!
terraform destroy
terraform apply
This is DevOps! Fast, accurate automation to solve problems and get up and running again.
On my journey learning and using the AWS Cloud, I have been looking at more of the offerings from Amazon.
Useful reading:
]]>The ansible code is in Part 7 of the repo github.com/allotmentandy/aws, this code creates a NEW instance on EC2.
To run the playbook:
ansible-playbook -i ./hosts --private-key ~/.ssh/private.pem createEC2.yml
Installing Tailwind into the laraveldocker app I made in Part 5, following this tutorial: https://tailwindcss.com/docs/guides/laravel
npm install
npm install -D tailwindcss@latest postcss@latest autoprefixer@latest
npx tailwindcss init
setting up the tailwind config and the webpack config to allow
npm run dev
Adding the CSS to the HTML in /resources/views/welcome.blade.php and, hey presto, Tailwind CSS is working.
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link href="{{ asset('css/app.css') }}" rel="stylesheet">
I also added Vue.js via https://github.com/laravel/ui
]]>Following this tutorial https://buddy.works/guides/laravel-in-docker
I started with a fresh install of Laravel 8 which is in Part 5 of the repo github.com/allotmentandy/aws with this Dockerfile
FROM php:8.0.5
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo
WORKDIR /app
COPY . /app
RUN composer install
CMD php artisan serve --host=0.0.0.0 --port=8888
EXPOSE 8888
docker build -t laraveldocker .
docker run -p 8888:8888 laraveldocker
(Note: Docker image names must be lowercase, hence laraveldocker.)
The first command shows all the containers; the second stops the container with the given id:
docker container ls
docker container stop ba0ce2c7575f
Visiting http://localhost:8888/ shows the Laravel frontpage.
I am going to set it up on an EC2 server instance, but that doesn't seem to be the norm, with Lightsail, Lambda or ECS seemingly better fits for hosting a container image.
So I have the Debian instance I set up in part 4 with Terraform; now I am going to manually log in via ssh and set up docker.
sudo apt-get update
sudo apt install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
sudo apt update
sudo apt install docker-ce
Now I need to add my docker container to Docker Hub:
docker ps # to see the docker images eg. id = 5a6a73bcea26
# docker commit -m "first commit" -a "NAME" laraveldocker allotmentandy/laraveldocker:latest # didn't work :(
docker login
docker tag 5a6a73bcea26 allotmentandy/laraveldocker
docker push allotmentandy/laraveldocker
Et voila! dockerhub / allotmentandy
Now I ssh into the EC2 instance:
docker pull allotmentandy/laraveldocker
sudo docker run -p 80:8888 allotmentandy/laraveldocker
updating the laravel files / dockerfile means a rebuild:
docker build -t allotmentandy/laraveldocker -f Dockerfile .
docker push allotmentandy/laraveldocker
Stopping the container, redoing the pull and running it again. Simples :)
]]>Today's blog post is about setting up the server instance with Terraform.
It also uses a fixed IP address, which uses the Elastic IP service from Amazon.
variable "awsprops" {
type = map(string)
default = {
region = "eu-west-2"
vpc = "vpc-04a899695f093e273"
ami = "ami-050949f5d3aede071"
itype = "t2.micro"
subnet = "subnet-071b970b97329866c"
publicip = true
keyname = "amazon nov 2021"
secgroupname = "IAC-Sec-Group-Terrform"
}
}
provider "aws" {
region = lookup(var.awsprops, "region")
}
resource "aws_security_group" "project-iac-sg" {
name = lookup(var.awsprops, "secgroupname")
description = lookup(var.awsprops, "secgroupname")
vpc_id = lookup(var.awsprops, "vpc")
// To Allow SSH Transport
ingress {
from_port = 22
protocol = "tcp"
to_port = 22
cidr_blocks = ["0.0.0.0/0"]
}
// To Allow Port 80 Transport
ingress {
from_port = 80
protocol = "tcp"
to_port = 80
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_instance" "project-iac" {
ami = lookup(var.awsprops, "ami")
instance_type = lookup(var.awsprops, "itype")
subnet_id = lookup(var.awsprops, "subnet") #FFXsubnet2
associate_public_ip_address = lookup(var.awsprops, "publicip")
key_name = lookup(var.awsprops, "keyname")
vpc_security_group_ids = [
aws_security_group.project-iac-sg.id
]
root_block_device {
delete_on_termination = true
iops = 150
volume_size = 50
volume_type = "gp3"
}
tags = {
Name = "SERVER01"
Environment = "DEV"
OS = "DEBIAN"
Managed = "IAC"
}
depends_on = [aws_security_group.project-iac-sg]
}
data "aws_eip" "project-iac" {
id = "eipalloc-07a144e8268e6616b"
}
resource "aws_eip_association" "my_eip_association" {
instance_id = aws_instance.project-iac.id
allocation_id = data.aws_eip.project-iac.id
}
output "ec2instance" {
value = aws_instance.project-iac.public_ip
}
Please visit this repo at github.com/allotmentandy/aws to see the Terraform code in the directory part 4. The code is in the setup.tf file, and to get it to build the instance you run the following 3 commands:
terraform init
terraform plan
terraform apply
The ip address is allocated to my account and it id = "eipalloc-07a144e8268e6616b"
The keyname is the name in the amazon system, not the local file.
The aws credentials are setup using awscli at the command line and stored in .aws/credentials
In my opinion, Ansible is a better way to set this up. It uses a set of files locally to login to and setup the server in a repeatable way.
Once again I am going to use the Debian 10 AMI - ami-050949f5d3aede071 setup with a t2-micro instance on the free tier.
Please visit this repo at github.com/allotmentandy/aws for the Ansible files to set this up.
I based this on github.com/kdpuvvadi/ansible-lamp repo as it worked well and is nicely structured.
Important files to setup
To run this code use this command where the key is set in the command
ansible-playbook main.yml --key-file /home/andy/.ssh/aws2021.pem -e 'ansible_python_interpreter=/usr/bin/python3'
visit the ip address to see the apache test page. /info.php to see the php test page. and login to the mysql server to test the database connection.
]]>Installing mysql isnt as simple as the rest of the LAMP stack. It is necessary to run the mysql_secure_installation command to setup the mysql password and the Update command to allowed login from the website for apps like Adminer. this article is a good explaination to why.
# install mysql
sudo apt-get install -y default-mysql-server
sudo apt-get install -y php8.0-mysql php8.1-mysql
#set the password and setup mysql
sudo mysql_secure_installation
# questions asked
# Enter current password for root (enter for none):
# Set root password? [Y/n] Y
# Remove anonymous users? [Y/n] Y
# Disallow root login remotely? [Y/n] Y
# Remove test database and access to it? [Y/n] Y
# Reload privilege tables now? [Y/n] Y
#test connection with the new password
sudo mysql -u root -p
# to allow login for adminer login to mysql and run
# UPDATE user SET plugin='mysql_native_password' WHERE User='root';
# install adminer
sudo wget "http://www.adminer.org/latest.php" -O /var/www/html/adminer.php
sudo chown -R root:root /var/www/html/adminer.php
sudo chmod 755 /var/www/html/adminer.php
sudo /etc/init.d/mysql restart
This allows the user to login from the website using adminer and administer the DB tables.
Please visit this repo at github.com/allotmentandy/aws for shell scripts and files to run this code.
]]>I my quest to learn more about cloud computing especially using PHP, I am spending the cold winter months developing skills. I am going to write blogs about investigating each of these
Setting up an instance is pretty easy - i created a private key, downloaded it and connected the first time using
ssh -i privatekey.pem admin@ec2-13-40-50-116.eu-west-2.compute.amazonaws.com
Note the user is admin, as that is the default for debian
Here is a link to the default usernames for different systems
aws default users for ssh connections to linux distros
To run apt-get you need to use sudo
So to install apache
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install apache2
sudo a2enmod ssl
sudo a2enmod rewrite
sudo /etc/init.d/apache2 restart
# install firewall and setup
sudo apt install ufw
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
# to get the https version running
sudo apt install certbot
sudo apt-get install python-certbot-apache
sudo a2ensite default-ssl
sudo systemctl reload apache2
# installing php 8.0 and 8.1
sudo apt-get install lsb-release apt-transport-https ca-certificates
sudo wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
sudo echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list
sudo apt-get update
sudo apt install -y php8.0-{mysql,cli,common,snmp,ldap,curl,mbstring,zip,tidy,xml,opcache}
sudo apt install -y php8.1-{mysql,cli,common,snmp,ldap,curl,mbstring,zip,tidy,xml,opcache}
sudo update-alternatives --config php
sudo apt-get install php8.0-fpm libapache2-mod-php8.0
sudo apt-get install php8.1-fpm libapache2-mod-php8.1
sudo a2dismod php8.0
sudo a2enmod php8.1
sudo systemctl restart apache2
The default directory for the files is /var/www/html Setting up a simple test.php file with the contents
<?php
phpinfo();
Gives the php info test page.
Next, lets install a mysql database to store some data.
Please visit this repo at github.com/allotmentandy/aws for shell scripts and files to run this code.
]]>Some example jobs I would be good at
This site and a number of others I have been working on use Tailwind CSS and I would like to find a role to teleport an old fashioned website into the future.
I enjoy writing on a number of topics including technology, gardening, cooking and aviation. Perhaps you need some articles written about something
Another skill I have is writing scripts (bash, php etc.) to automate regular processes in your day-to-day work process. Instead of doing it by hand, perhaps i could write you a script to do it automatically
I have recently been working on a mapping project using OpenStreetMap and have used Google Maps extensively in the past.
This is just a short idea of projects I could do for you.
]]>I have learnt to configure the cloud with terraform, then setup the tech stack with Ansible.
I have also learnt to Dockerise the entire system as a container
But creating Packer images seems like the most efficient method to build servers with all the tools above, which is then deployed as an image. I can still use Ansible to config it, with terraform to launch it.
]]>I have the source code on this gist
It is a demo of a number of features of using termwind to make the command line colorful.
]]>Firstly, i wanted to check what ports were open on the machine. https://vitux.com/find-open-ports-on-debian/ gives a overview of the options, but installing nmap is my favourite tool for this and running this command gives the output of all open ports.
apt-get install nmap
nmap -p- 127.0.0.1
Nmap provides the name of the commonly used port as the 'service', port 22 is ssh and it wasnt installed.
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-15 09:18 BST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000058s latency).
Not shown: 65533 closed ports
PORT STATE SERVICE
631/tcp open ipp
1716/tcp open xmsg
Nmap done: 1 IP address (1 host up) scanned in 10.15 seconds
Next, to install ssh I ran this command
sudo apt-get install openssh-server
then to check on the status
sudo systemctl status sshd
This allowed me to ssh into my machine and copy over a few files. Running the nmap command again and you can see it is now working.
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-15 09:35 BST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000031s latency).
Not shown: 65532 closed ports
PORT STATE SERVICE
22/tcp open ssh
631/tcp open ipp
1716/tcp open xmsg
Nmap done: 1 IP address (1 host up) scanned in 10.12 seconds
to enable / disable ssh i run these commands
systemctl enable ssh
systemctl disable ssh
Next, I will use Ansible to install Apache, PHP, Mysql/MariaDB and more tools ...
]]>node[~"^(website|twitter|facebook|contact:website|contact:twitter|contact:facebook)$"~"."] ({{bbox}});
out;
This provided 17406 features with at least 1 of the 3 links, with 202 for twitter and 260 for facebook. Here is the list of the full contact details https://wiki.openstreetmap.org/wiki/Key:contact
]]>Yesterday, I uploaded the demo map and got a buggy version of the OSM map working showing the data stored on a map. I wanted to explore what is actually stored in the system already so I got this query to extract multiple tags (website, twitter, facebook) in one overpass-turbo query.
node[~"^(website|twitter|facebook)$"~"."] ({{bbox}});
out;
This file was exported as .geojson and I then wrote a php script to extract those 3 elements and count them. using json machine
<?php
use \JsonMachine\JsonMachine;
require 'vendor/autoload.php';
$counter = 0;
$features = JsonMachine::fromFile('websiteTwitterFacebook.geojson');
foreach ($features as $key=>$value) {
if ($key=="features"){
foreach ($value as $feature){
echo $feature['properties']['@id'];
echo " ";
if(isset($feature['properties']['website'])) {
echo $feature['properties']['website'];
}
echo " ";
if(isset($feature['properties']['twitter'])) {
echo $feature['properties']['twitter'];
}
echo " ";
if(isset($feature['properties']['facebook'])) {
echo $feature['properties']['facebook'];
}
echo PHP_EOL;
$counter++;
}
}
}
echo "total: ". $counter;
?>
Total entries with any of the three elements was around 16,000, mainly with websites entries only. This data included some phone numbers and other things:
+44 20 3912 9400
+44 20 7584 1363
+44 20 7584 8918
+44 20 7589 0046
+44 20 7589 8851
+44 20 7730 0085
+44 20 7937 5033
+44 20 8743 0336
+44 20 8749 9977
155
400rabbits.co.uk/
HTTP://EVERESTSPICE.CO.UK
Henry Mayhew
IDMestates.com
La Cantina Sociale
Www.popnrest.com
htpps://www.lowerladysden.co.uk
http://countdown.tfl.gov.uk/#|searchTerm=blackstock road|stopCode=49525
https://docs.google.com/spreadsheets/d/12cLkm_p2ovnebNhQm4NmuTV9A6N6FZgg1SpUZ8phuq0/edit?usp=sharing
leicestersquare@allbarone.co.uk
But apart from these and a few similar ones i removed from the list, it was mainly websites.
]]>Openstreetmap tags / map features
I have created a demo page to show the map features from the list above. Try entering these in the search box on the demo page
What I would like to do is create 2 subpage templates which pull the query from the url
At first i would like to start with a simple site, but structure it, so it is easy to add features in the future.
I had Laravel in mind as I am used to developing with that, but not sure the best way to get this mapping included.
A modern CSS framwork which brings joy to the way webpages are designed. I have spent a lot of time working with both versions 1 and 2 of tailwind.
read more Tailwind CSS posts
I moved to using Debian Linux as my main computer 2 years ago, and have built my knowledge. As part of my focus on Devops, I have also experience with Centos, Ubuntu and other Linux systems (alongside Windows and Apple Mac)
read more Debian posts
read more Ansible posts read more Devops posts
read more Bash posts
read more Laravel posts
using - Laravel Dusk - for front end testing - PHPUnit - for code testing - Selenium - for website testing.
]]>update-alternatives is a lovely little tool to select which php version to use.
apt-get install lsb-release apt-transport-https ca-certificates
wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/php.list
apt-get update
apt-cache search php 8
apt install -y php8.0-{mysql,cli,common,snmp,ldap,curl,mbstring,zip,tidy,xml,opcache}
update-alternatives --config php
php -v
However, this only worked at the command line, to get apache to work with php8.0 i also ran
apt-get install php8.0-fpm libapache2-mod-php8.0
a2dismod php7.4
a2enmod php8.0
systemctl restart apache2
and viola! the phpinfo() was now version 8.0.3!
However, a tested a Laravel 5 app and it errored with a blank screen. So had to update composer to version 2 and run a composer upgrade like so
composer self-update --2
composer upgrade
What an improvement in speed for composer! I had to update these lines in the composer json to
"php": "^7.4|^8.0",
"fzaninotto/faker": "^1.9.1",
"phpunit/phpunit": "^9.3"
I still have a laravel 5.5 app that i like to use, to downgrade back to php7.4
a2dismod php8.0
a2enmod php7.4
systemctl restart apache2
and to re-enable php 8.0
a2enmod php8.0
a2dismod php7.4
systemctl restart apache2
But there is a better way, edit the apache2-site-74.conf and add this to the virtual host conf
<FilesMatch \.php$>
# Apache 2.4.10+ can proxy to unix socket
SetHandler "proxy:unix:/var/run/php/php7.4-fpm.sock|fcgi://localhost/"
</FilesMatch>
and enable the proxy module
a2enmod proxy_fcgi
systemctl restart apache2
and viola! both versions work, php8 is the default, with php7.4 enabled for the site.
Perfect!
]]>After running
apt-get update
apt-get upgrade
OR
apt-get dist-upgrade
Regularly find that a lot of files are left behind. So i started a script to cleanup afterwards.
apt-get clean
apt-get autoclean
apt-get autoremove
apt-get purge
using docker, there is also space to reclain using (WARNING: adding the -a at the end deletes all of them!)
docker system prune
Interested to hear from everyone about how else they clean up space after installing and upgrading their computers...
]]>I have been meaning to use entr for a while see entr docs and this seemed perfect for the job.
#!/bin/bash
find /var/www/jigsaw-blog/source/_posts/ | entr sh -c './build.sh';
One problem i found was the max_user_watches was set too low, so running the following command as root, increases the number of file nodes this is allowed to watch. And it worked perfectly after that.
echo 100000 | tee /proc/sys/fs/inotify/max_user_watches
Another problem i found is that it breaks if you rename a file, as it monitors for changes, but it cannot deal with that.
This is my build.sh script. It runs the local build but can also run with these flags too:
--npm runs the laravel mix webpack build
--production generates the production build
It uses espeak to announce it is building (a bit like command and conquer!) and then xdotool to reload the firefox browser.
#!/bin/bash -e
# espeak "Building"
for arg in "$@"
do
if [ "$arg" == "--npm" ]
then
echo "Running npm run production"
espeak " running en pee em "
npm run production
fi
done
espeak "building Local"
./vendor/bin/jigsaw -vvv build
for arg in "$@"
do
if [ "$arg" == "--production" ]
then
espeak "building production"
./vendor/bin/jigsaw build production -vvv
fi
done
espeak "Finished"
# reload firefox in background
CURRENT_WID=$(xdotool getwindowfocus)
WID=$(xdotool search --name "Mozilla Firefox")
xdotool windowactivate $WID
xdotool key F5
# return to the editor (i prefer it to keep the browser at the front so comment this out!)
xdotool windowactivate $CURRENT_WID
]]>github.com/tailwindlabs/tailwindcss.com
I prefer to have apache serve all my pages locally, so i followed the instructions adding a
yarnpkg export
to create an html static site of the documentation into an 'out' directory which apache can serve, which all worked well (if a little slow to generate it all). But it all works locally, which i like.
As the Tailwind CSS project has grown and matured, a great Awesome TailwindCSS github page has grown and grown.
https://tail-animista.vercel.app/
]]>I have now been using and learning Ansible for a good few months and wanted to list all the resources I regularly use.
Stack Overflow questions tagged with Ansible
I really like learning videos in the evening to find out more about a topic. Jeff is a great Ansible guru and this playlist has taught me a lot. His books are also superb resources for learning about the world of Ansible.
]]>The idea of automating everything, having it act in an expected and repeatable way is superb. I have written bash scripts that try to do this in the past, but fall over at scale and repeatability. Ansible is great for this.
In the process of learning ansible I have an inventory with 1 control node and 4 machines
[x] localhost (debian 10) [Control node]
[x] virtual machine (centos 8)
[x] virtual machine (ubuntu 20)
[x] virtual machine (alpine linux 3.12)
[x] raspberry pi (raspbian)
With this as my personal cloud, i can connect with ansible via ssh and run a whole range of tasks including:
[x] install and configure a webserver, mysql and install a laravel github repo
[x] run a git pull to update the repo
[x] update the OS and software
[x] install a docker container on a server
[x] check the status of the machines
Ansible provides a number of Facts for a machine which can be seen with this command:
ansible localhost -m setup --tree /tmp/facts
it is a json file of around 600 lines of Facts including for my localhost machine:
"ansible_facts":
{
"ansible_os_family": "Debian",
"ansible_architecture": "x86_64",
"ansible_memtotal_mb": 7417,
}
There is also extensive details about hardrives, IP addresses, python versions, users and the computer in general.
Similarly, Ansible provides details of the services running on the machine:
ansible localhost -m service_facts --tree /tmp/services
it provides a 1000 line Json file, with details of the services running on the machine, here is an example of the result:
"ansible_facts":
{
"services":
"apache2":
{
"name": "apache2",
"source": "sysv",
"state": "running"
},
"apache2.service":
{
"name": "apache2.service",
"source": "systemd",
"state": "running",
"status": "enabled"
},
}
Also available is a list of package facts
ansible localhost -m package_facts --tree /tmp/package
This resulted in a json file with over 20000 lines, with details of everything installed on the machine, here is a section from the result:
"ansible_facts":
{
"packages":
{
"apache2": [
{
"arch": "amd64",
"category": "httpd",
"name": "apache2",
"origin": "Debian",
"source": "apt",
"version": "2.4.46-1"
}],
"zeal": [
{
"arch": "amd64",
"category": "doc",
"name": "zeal",
"origin": "",
"source": "apt",
"version": "1:0.6.1-1+b1"
}],
}
}
This provides a great deal of info about the machine, from which you could manage the the machines with conditionals like this.
- hosts: localhost
roles:
- role: debian_stock_config
when: ansible_facts['os_family'] == 'Debian'
One thing I really like about Ansible is it is pretty obvious what the code is doing, and logical in how it does it. All these commands in this first part are ad-hoc, as in they are single commands you run in the command line.
]]>The two tools i really liked and have found really useful are fzf
and ranger
.
fzf is a fuzzy search for the terminal. To search for docs in the directory which contain ansible run ls | fzf
and i can type ansible and it shows the files that contain ansible.
To search for ansible only use:
ls | fzf -e -q "ansible"
ranger
is a terminal file manager that is nice and quick and lets me navigate and view the file content.
both of these tools can be installed with
apt-get install fzf ranger
I am always on the lookout for more tools that improve and speed up the workflow.
]]>One thing i really like to use is offline tools to help me access info offline. However I am looking for info on a wide range of topics including devops, webdev, frontend, backend and linux in general. There is no one super tool which covers everything so here is a list of my favourite souces.
The man
command provides a great starting point to read the manpages for a command. eg. man ansible
gives a good intro, but it isnt the most readable or user friendly.
Finding what man apropos
pages are available is possible with apropos ansible
Alongside the command line tools above, gman
is a neat GUI tool to show an index of the man pages on the system.
tldr is similar to the man pages above, but gives examples of usage. tldr git
or tldr ansible
all provide a neat list of the most common commands you want to use. You have to update the database of info with the tldr --update
command
In a similar vain to tldr above, eg, provides even more examples of how to use a command, with a bit more detail.
Zeal is a GUI which allows one to download docs to read offline. It doesnt cover as much, for example it has the docs for Ansible, Docker and Laravel, but not Terraform
devdocs.io is an offline website which allows you to download the docs to read offline. It does cover Terraform, Ansible, Laravel and more. I use it with Chrome with a plugin
I would be interested in anymore tools people use for offline documentation, especially one that covers Tailwind CSS.
]]>I wanted a little script to import the values of the .env file to use in the bash scripts. This works as i expect.
#!/bin/bash
source /var/www/laravel/.env
echo $DB_USERNAME
This is a one liner to backup all the mysql databases.
mysql -u root -p -N -e 'show databases' | while read dbname; do mysqldump -u root -p --complete-insert --routines --triggers --single-transaction "$dbname" > "$dbname".sql; done
Working with the jigsaw static site builder, i wrote this simple script to copy the production version of the site to another directory and deploy it to the github pages site.
#!/bin/bash -e
echo "Deploying build_production to allotmentandy.github.io"
echo "---------"
cd build_production
cp -r * /var/www/allotmentandy.github.io
cd /var/www/allotmentandy.github.io
git add .
git commit -m "update"
git push
echo 'finished'
#!/bin/bash
echo "This a bash script to run composer clear the caches)"
echo "--------------"
php -v
#echo "--------------"
cd /var/www/laravel
php artisan down
composer dump-autoload -o
php artisan view:clear
php artisan config:clear
php artisan cache:clear
php artisan route:clear
php artisan clear-compiled
php artisan config:cache
php artisan optimize
php artisan up
]]>I was on twitter and I found this article about alternative career paths from freecodecamp and the Devops idea was one most appealing to me.
I have spent a lot of the last few months learning more and more about linux and the Command line. Including learning Bash Scripting and Python.
create a ssh key and copy it to the raspberry pi:
root@linuxtechi:~# ssh-keygen
root@linuxtechi:~# ssh-copy-id root@192.168.1.156
This allows you to ssh into the machine without a password.
Back in the days as an Apple Mac user, i really like Applescript. With these little scripts you could get stuff done quickly and easily. Laravel also provides a need way to write command line scripts which allow me to download and process the data.
Bash scripting is similar, i now have scripts to create new posts for this blog, build the css/js with webpack, deploy the site to github and many more.
To help, I have added a Bash Cheat cheet to this blog.
Ansible is a tool to software provisioning, configuration management, and application-deployment tool. wikipedia
opensource.com intro to ansible
I do like learning from videos and I found this series of videos Ansible with Jeff Geerling
Cheatsheets
github.com/germainlefebvre4/ansible-cheatsheet
armoucar.github.io/ansible-cheatsheet/
Now i can setup a hosts file like so:
[raspberrypi]
192.168.1.166 ansible_ssh_user=pi hostname=raspberrypi
Once i set up the ssh login (see above), i can now test the server connection with
ansible -i hosts -m ping raspberrypi
i have setup a new github repo which i run like this:
ansible-playbook site.yml -i hosts
With Laravel there is Homestead which provides an easy vagrant box with all the linux tools to run a website. I have also used Virtualbox for years to run debian boxes from around the time of Debian 6.
vagrant up
vagrant halt
vagrant ssh
Docker takes the whole Vagrant VM forward and makes an entire container which you can deploy as the finished website. I have used it before in my last contract role and found it very powerful.
Kickass Laravel Development with Docker - David McKay - Laracon EU 2018 Amsterdam
Jenkins is a Continuous Integration and testing environment which allows you to test your code as it is pushed to Github.
PHP UK Conference 2018 - Michael Heap - Zero to Jenkins: Automatic builds + deploys
I am installing Jenkins on my Raspberry Pi using this tutorial from https://pimylifeup.com/jenkins-raspberry-pi/
https://github.com/amochohan/laravel-jenkins-ci-boilerplate
To install jenkins add this to the playbook
tasks:
- name: ensure the jenkins apt repository key is installed
apt_key: url=https://pkg.jenkins.io/debian-stable/jenkins.io.key state=present
become: yes
- name: ensure the repository is configured
apt_repository: repo='deb https://pkg.jenkins.io/debian-stable binary/' state=present
become: yes
- name: ensure jenkins is installed
apt: name=jenkins update_cache=yes
become: yes
- name: ensure jenkins is running
service: name=jenkins state=started
]]>Firstly i discovered nextdoor.co.uk a localised website for helping people communicate with each other. The site excited me at first and I managed to post a number of messages about potential jobs/work. The first one about IT work got one response, I elderly gentleman who wanted help 'in the future' with setting up a laptop, security and more. However, he has never got back to me and I dont hold out much hope that he will. The second post was offering gardening work, something I can do and enjoy. This too had one response, from a lady nearby. I wondered round, to find a tiny garden, but alas the women had no money to pay for the work and was just trying to get me to do it as a portfolio piece. Hmm. Further posts have gotten me nowhere except I guy that asked for a day or twos labour, and then cancelled the day before as his cousin was coming and would do it for free. Great.
Yesterday, I emailed / contacted around 10 local gardening firms about gardening work and have got zero replies. Great.
Yesterday I also looked at finding potential work on Upwork.com A website for freelancers and companies to content for work. I have used this website before and also elance which they have taken over in the past. I preferred elance as it worked better without all the fancy javascript which doesn't quite work. Now though, Upwork has created a type of currency called 'connects' which you have to buy for the right to connect with potential jobs. I have used this site to hire programmers in the past, and the spam was very high.
So still trying to find work, any ideas?
]]>So I started to look for a Laravel starter project that i can work with.
I am going to install each one and write a blog on the experience of each.
I was looking for something simple with just tailwind and laravel so i also found this:
laravel frontend presets / tailwindcss
You start with a default laravel install and justs add the presets on top.
I have used this project in the past and found it great.
Not really a starter project, more of a solution of what i am trying to offer.
]]>But one thing I checked yesterday, was how many pages were indexed by Google.
Google search for "site:http://allotmentandy.github.io/" and the answer was NONE - ZERO - ZILCH - NADA!!
But Google provided a handy link to its Google Search Console. So i managed to login to my Google account, added a meta tag to the layout template and managed to get Google to index the site. Today there are 8 pages in the Google index. Which is better.
Small wins each day! Of those 8 pages, 3 only have 'AllotmentAndy' as the title, The blog index page has 'AllotmentAndy - GitHub Pages' as the title and the others have 'AllotmentAndy - PAGE TITLE'
Will see how this changes over the next few days.
Would like to improve the Title tags for the static pages in the root. I have added some yaml to the top of the web safe fonts page to improve the title tag. Lets see how long it takes Google to notice!
]]>Back in 1998, i registered a url, londinium.com and built a website directory. Over the years, i built quite a nice website, with over 100,000 links to mainly London based information. However in 2015, it started to be bought down daily by DDOS attacks. It was also removed from google searches and the revenue went down towards zero. So i shut it down.
Since then, i have tried with a shopify online store, but that hasnt worked.
So i come back to the idea of a London directory website, but want to make it more useful.
So i am going to look at Wordpress plugins to re-launch londinium.
These are the wordpress directories i have looked at, to be far i wasnt too impressed with any of them.
Geo-directory
Geo-directory on Wordpress
Business Directory
Business Directory on Wordpress
Pluginsware
Advanced Classifieds & Directory Pro on Wordpress
This looked the most impressive on first glimpse, but the demo content wouldnt import and after a few tries i gave up.
Hivepress
Hivepress on Wordpress
Web Directory Free
Web Directory Free on Wordpress
I would be interested in seeing websites that use any of these plugins and chat to users of these plugins, but it looks like I am going to have to build something myself.
]]>I use a number of RSS feed readers on my Debian machine and wanted to highlight what I like about each one.
My main RSS feed reader, i like it, but it is a little slow to open and close. One small problem I have is that when i open it, it is always set to "ALWAYS ON TOP". Everytime I open QuiteRSS, i have to untick the "ALWAYS ON TOP" setting in the right click. I cannot find how this has been set in the settings/options panels. Maybe this is a feature I discovered in another window manager, as i used LXDE earlier. Any ideas how to unset this?
All round, i like QuiteRSS, it doesnt have a progress box to show downloads, something I would like to see as i often work with sporadic internet connections.
There is an ability to search within one feed, but thats not how I need to search.
One really annoying feature is if you click on the line between the news items in a feed, it decides to open in a browser. This also tends to open a new browser window, but as i am offline, it just gives an error page.
Another strange feature is that many articles are marked as read, even thou i havent had a chance to read the feeds. I tend to go to the wireless hotspot, download all the feeds and go back to my desk to read them. But they are often marked read.
I use QuiteRSS as my main feed reader, but am open to new ways. So i tried the following apps, but none quite did it for me!
I like the way you can read the unread feeds across all the feeds in the app. This is how i have gone through the feeds, but it is a little unnerving when they disappear after you click on the next one. But i suppose this is now not unread!
I also liked the way there was a make folder option, QuiteRSS doesnt offer folders for the feeds, so they are all in one long list. The problem with Liferea is that any folder I make, doesn't actually appear.
After finding a number of OPML files (rss backups) on github, I wanted to see what these feeds offered, but didnt want to clog up QuiteRSS or Liferea with other peoples feeds, so i used Akregator.
Akregator was quite minimal, but let me import 5 opml files with ease and read the feeds.
However, after shutting down Akregator (or so i thought) It seems to keep a background process running which created errors as i am offline a lot.
So now i dont use Akregator much as that background process wasnt what i wanted.
I installed a Command Line (CLI) to test out what RSS feeds were like from the command line. I imported my main opml file to give it a whirl, but on reopening the app after the first feed update, all the feeds were gone. So i gave up on that one.
I would be interested in any solutions to issues i have written about here. You can contact me via twitter If you can share your feeds via .OPML files I would love to see what you read.
]]>There was a website a long time ago called the World Question Centre, where they asked famous people to ask one question. One of the respondents asked Why is music such a pleasure? I always loved this questions as I see it as perfect.
I have also used this question to test people. If I say this to you and you try to answer it, you have missed the point. The answer isnt in something in the ears and brain or any other reason. It is just a great question, it is not a test to answer. I am looking for people who say, What a great question.
IBM used to test people like this, if you were looking for work at IBM, they would take you to dinner to sillently test you. Once you got your food, if you put salt and pepper on before tasting it, you failed! They are looking for people who fit a high quality culture. I notice people putting on salt and pepper often and think of this.
There are a number of other questions i like to ask to test people and see what they are made of. For example, another question is, Where does money come from, how is it made?
This sort of testing people is to route out people who do not think and just live in a bubble of ignorance. I am not trying to say I am right here, just it is a waste of time to engage with people who cannot question what they think they know. How can these help you improve if they do not improve themselves.
]]>Today I started an article on RSS readers that are easy to install on Debian. I found a cool command-line app called newsboat, which I have been using today.
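For reference, this is roughly how to get newsboat going on Debian and feed it an existing OPML export (a sketch; feeds.opml is a placeholder for your own file):

```shell
# Install newsboat from the Debian repositories
sudo apt install newsboat

# Import an existing OPML export into newsboat's url list
newsboat -i feeds.opml

# Then launch it; 'r' reloads the current feed, 'R' reloads all
newsboat
```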
On a personal note, I was given a nickname last year, WOODY, which was also the name of the 3rd version of Debian! Lovely...
]]>Tailwind CSS is much more like coding CSS and HTML about 20 years ago, the difference being that you can do so much with a core set of standard CSS options.
The advantage is that you just write the HTML with classes that make sense in the context of what you are doing with the HTML element.
]]>I would like it to format the $something in title case, with the dashes replaced by spaces.
Also, I could add a line to open the new file in Sublime.
#!/bin/bash
# Create a new markdown post stub in source/_posts, named with today's date.
echo -n "New File Name: "
read -r something
echo "You Entered: $something"
FILE="source/_posts/$(date +%F)-$something.md"
# Write the Jigsaw front matter into the new file (quoting $FILE in case of spaces)
cat > "$FILE" << EOF
---
extends: _layouts.post
section: content
title: $something
date: $(date +%F)
description:
cover_image:
excerpt:
categories: []
---
EOF
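That title formatting could be a small bash function, assuming the file name is a dash-separated slug (title_from_slug is a hypothetical helper name, not part of the script above):

```shell
#!/bin/bash
# Turn a dash-separated slug into a spaced, capitalised title.
# Requires bash 4+ for the ${word^} first-letter-uppercase expansion.
title_from_slug() {
  local out="" word
  IFS='-' read -ra words <<< "$1"
  for word in "${words[@]}"; do
    out+="${word^} "
  done
  echo "${out% }"    # trim the trailing space
}

title_from_slug "my-new-post"   # → My New Post
```

The function could then feed the title: front matter field while the raw slug still names the file.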
]]>
Jigsaw 1.3.27
Usage:
command [options] [arguments]
Options:
-h, --help Display this help message
-q, --quiet Do not output any message
-V, --version Display this application version
--ansi Force ANSI output
--no-ansi Disable ANSI output
-n, --no-interaction Do not ask any interactive question
-v|vv|vvv, --verbose Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
Available commands:
build Build your site.
help Displays help for a command
init Scaffold a new Jigsaw project.
list Lists commands
serve Serve local site with php built-in server.
]]>I would like a language switcher to swap between EN and PT at first. This would change the nav, footer and buttons across the site. Perhaps there could be a posts_pt directory for translated posts, which I would like to have auto-translated by some online gizmo (Google)?
I wonder if this is the way others have tackled this problem in the past.
Can JavaScript detect the locale? I need to search for this next.
I have created an /about_pt page to experiment with this.
https://lokalise.com/blog/localizing-apps-jquery/
]]>My idea is to output markdown pages for the 20,000 private jets in my laravel54 package. I think it will be easiest to make a Laravel command to output them, then copy them to the Jigsaw _posts directory to build.
Will Jigsaw be able to handle that many pages?
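As a sketch of the output step, a plain bash loop can stamp out one front-mattered markdown file per record; the CSV fields (registration, model) and the temp directory here are made up for illustration, not the real package's schema:

```shell
#!/bin/bash
# Generate one markdown stub per record from CSV input.
# The registration/model fields and the output directory are hypothetical.
outdir=$(mktemp -d)

while IFS=, read -r reg model; do
  cat > "$outdir/$reg.md" <<EOF
---
extends: _layouts.post
section: content
title: $model ($reg)
date: $(date +%F)
---
EOF
done <<CSV
N12345,Gulfstream G650
N67890,Cessna Citation X
CSV

ls "$outdir" | wc -l   # → 2
```

The same loop driven by the real data source would write straight into source/_posts for Jigsaw to build.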
]]>I really like the search provided out of the box with the Jigsaw blog template, but I want to add the about page to the search.
The collection in the config.php file needs to have the about page added.
]]>Lunch was a smorgasbord of fresh salad with the nice focaccia.
Dinner was a large stir-fry of the first pak choi leaves with kale, Chinese cabbage and mizuna.
It was all very nice.
]]>It took me all morning to fix this website deploy issue; there was an old repo that was set up years ago as an experiment.
I deleted that repo and now everything is working as expected.
]]>Daring Fireball has the most comprehensive guide to Markdown I have seen.
# Markdown syntax
https://guides.github.com/features/mastering-markdown/
**bold**
*italic*
Hello there `people`, how are you?
![Image of Angular](https://www.w3schools.com/angular/pic_angular.jpg)
[link to Google!](http://google.com)
> Hello there people
> How are you?
- [x] This is a complete item
- [ ] This is an incomplete item
Hello @damusix this is just some markdown examples!
emojis :sparkles: :camel: :boom:
https://www.webpagefx.com/tools/emoji-cheat-sheet/
# This is an `<h1>` tag
## This is an `<h2>` tag
###### This is an `<h6>` tag
*This text will be italic*
_This will also be italic_
**This text will be bold**
__This will also be bold__
_You **can** combine them_
###### Lists
* Item 1
* Item 2
* Item 2a
* Item 2b
1. Item 1
1. Item 2
1. Item 3
1. Item 3a
1. Item 3b
![An Image from this repo](/images/logo.png)
Format: ![Alt Text](url)
http://github.com - automatic!
[GitHub](http://github.com)
###### Checklists
- [x] @mentions, #refs, [links](google.com), **formatting**, and <del>del tags</del> supported
- [x] list syntax required (any unordered or ordered list supported)
- [x] this is a complete item
- [ ] this is an incomplete item
###### Tables
First Header | Second Header
------------ | -------------
Content from cell 1 | Content from cell 2
Content in the first column | Content in the second column
Any URL (like http://www.github.com/) will be automatically converted into a clickable link.
Any word wrapped with two tildes (like ~~this~~) will appear crossed out.
###### numbered lists:
1. One
2. Two
3. Three
###### bullet points:
* Start a line with a star
* Profit!
###### dashes for lists
- Dashes work just as well
- And if you have sub points, put two spaces before the dash or star:
- Like this
- And this
]]>There are a number of Laravel projects I am enjoying using at the moment.
I am looking to create a dual-language website in English and Portuguese, to help me learn Portuguese and as a playground to learn about internationalization and multi-language websites.
]]>This creates 27 yeast blocks of around 0.6 ounces each. I left these drying overnight, and today I will make bread and freeze a load of them.
]]>Working on templates for this static site. The mix of markdown and HTML isn't working.
[ ] Daily page - template for each day, with a shell script to create it
[ ] Code page - with examples of code blocks
[ ] Markdown Examples
[x] Build.sh - build both the local and production sites
[ ] Deploy to github.io
[ ] Create daily post page
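The Build.sh item above can be sketched like this, assuming the stock Jigsaw and Laravel Mix commands mentioned elsewhere on this site (the build directory names follow Jigsaw's defaults; treat this as a starting point rather than the finished script):

```shell
#!/bin/bash
# build.sh - build both the local and production versions of the site.
set -e

# Local build: compile assets for development, then generate build_local/
npm run dev
./vendor/bin/jigsaw build

# Production build: minified assets, then generate build_production/
npm run production
./vendor/bin/jigsaw build production
```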
]]>Having been online for 25 years, I have gone from an all-rounder managing CSS, graphics, code and servers back to being an all-rounder doing the same thing. In the intervening years, I have worked with PHP and CodeIgniter, and now mainly Laravel, and have set up this system to allow me to entertain myself whilst (re)learning CSS, JavaScript, Markdown, JSON, bash scripting and graphic design, something I have ignored for too long.
Having moved to Debian Linux as my main daily machine, I have enjoyed setting up a number of projects, but wanted somewhere to combine and publish my writing and experiments for all to see.
So although I had known about Jigsaw for years as a Laravel developer, I recently took the plunge and installed it. I have tried a number of other static site generators, including Hugo and Jekyll, but they didn't really click with me for a number of reasons. Although I have 'played' with Ruby, Go and Python, I tend to fall back to bash and PHP as the languages I like. I also wanted something that wasn't huge, but powerful enough for me to get stuff done without a huge learning curve.
So Jigsaw uses the Laravel Blade syntax that I have used in many other projects, and comes with a neat quick-start blog with a nifty search facility from fuse.js, which uses JSON. I created a site with a blog (_posts) and static pages (_pages), but the search facility only indexed the blog. No problem: I created a new collection in the generateIndex handle method and merged the two collections, and hey presto, it worked.
There are a number of features I liked 'out of the box'; for example, an RSS/Atom feed was included. This is great as I am a big fan of the RSS feed concept.
The basic app also had a category system, which I have included via an autogenerated collection in the config file.
The Jigsaw blog system also uses Tailwind CSS, Webpack and Laravel Mix to process the CSS and JS with the npm run dev and npm run production commands.
I have really quite enjoyed the experience and will publish this all on github.io and github.com/allotmentandy in the near future. I am enjoying working with CSS; the imagery comes next.
Playing with the SVG image format is also on the to-do list.
[ ] fuse.js search JSON generator only takes the first 255 chars of the page content.
[ ] the build process is very fast, taking about 0.2 seconds to build the site.
[ ] Tailwind CSS is really nice once you get your head around it; it is like CSS used to be.
[ ] I miss the 'php artisan' commands; I want a script to create a blog page for today, with the date and time now, the filename I give it and a basic div, like this: php artisan make:post 'title of post'. But I will make it in bash.
I have been using Debian Linux for years, since about version 5. Recently I found the 4.9 kernel caused the computer to lock up and freeze completely, and an upgrade to the 4.19 kernel was even worse. So I backed up my machine and installed Debian testing! This fixed the freezing issue and the computer has not crashed in weeks.
To show all installed apps in Debian with a description:
dpkg-query -l
I am building this website using the Jigsaw static site generator, but by default the search only worked for the _posts pages, so I added a new collection to the search JSON and merged them together. Now you can search all the pages. I must look into the Fuse.js that powers it.
I have had a good day learning Markdown, Tailwind CSS and Jigsaw static site development, and found some good docs on the internet to help me learn all this.
]]>