Blogs

  1. Favicon NixonMcInnes

    [Blog] NixonMcInnes: Designing a business for emergence and complexity

    Initiative map

    Here at NM HQ, we’re using a new approach to redesigning our business that I think is completely unique and possibly revolutionary.

It’s based on initiative and emergence, and perfectly suited to creating decentralised and purpose-driven businesses.

    It’s born out of the Source principles work of Peter Koenig, and has been created by NM associate and Very Clear Ideas founder Charles Davies. Two people who are proving very influential in our ongoing transformation.

    I want to share how this is working in case it’s of value to others.

    First a bit of back-filling

We started life as a web design and build agency, became a marcomms agency, then a consulting firm. Over time our team make-up changed a bit to meet our clients’ needs, but the underlying structure ultimately stayed the same.

    We had a sales and marketing function, a consulting team, some finance/ops people and a leadership group. How we did business was always forward-thinking, but fundamentally we ran a fairly standard operating model.

In the past year, part of my work has been to make the company more purposeful and explicitly values-led. Our purpose is ‘developing purposeful organisations’ and two of our core values are fulfilment and autonomy.

    To put those at the heart of how we operate, we knew we had to look at how we structured job roles and the organisational design of the business.

    We’ve also been getting increasingly tired of the boom/bust consulting cycle, and thinking that given we’re aware of the increasing complexity and emergence in the world, we ought to find a better way of operating in it.

    Mapping the ideas in the company

    With the value of fulfilment front of mind, and initiated by Charlie, we went through everyone’s job roles and had a very honest conversation based on two simple questions:

    • What do you love about your job?
    • What do you hate about your job?

    We got them to strip anything they hated out of their work, so their roles were pared down only to what they found to be fulfilling.

    At the same time, we uncovered and mapped the real initiatives at play in the company: not the business functions and teams, but the projects or ideas that were brought into life through someone stepping up.

    We mapped these as they fitted into the company’s ultimate purpose, the initiative brought by the founder, Tom.

    From here we have given everyone complete autonomy to start their own initiatives, as long as they contribute to the company’s purpose of ‘developing purposeful organisations’. And we can see how it all adds up from the initiative map.

    Focusing on purpose to unleash creativity

    Initiative mapping is the process of uncovering the different initiatives at play in the business, and how they all contribute towards the organisation’s purpose.

    The biggest difference between this and the traditional approaches to organisational structure is that this isn’t about organising people. Instead of creating a series of boxes and job titles that then need to be populated by human beings, it’s just looking at what’s already there. We’re not creating anything new, we’re just looking at and making public everything that’s happening. This way we’re able to work with informal power structures rather than making everything neat and tidy to make a ‘functional machine’.

    The initiative map we’re using is a set of concentric circles that all have a purpose and a name attached. Each circle represents an initiative, started by a named individual in the company because they have a need they want to meet, and are doing so by bringing an idea to life in the company.

Each initiative sits within another initiative, ultimately contributing to the source initiative that grounds the whole company.

Each initiative is a space where its source has total authority and autonomy to lead, change or wind it down – as long as it doesn’t transgress the boundaries of the initiative it serves. To reduce the chances of that happening, each initiative goes through a briefing process where the owner checks its purpose and scope against the needs of the person whose initiative they’re working into.

And if it doesn’t fit, they have the chance to change it, or we give them total autonomy and support to take that project or idea somewhere else – inside or outside of the company.

    This way, it keeps the whole organisation focused on the core purpose, without squashing people’s creative energy.

What we end up with is an emerging map of the energy and authority in the company – one that is totally dynamic and keeps everything we do in line with the purpose and values of the company.

    And on the ground, we have a team of people only doing things that they’re really passionate about, and have taken full responsibility for.

    What makes this different

    This approach differs from most standard models for lots of reasons. Here are just a few.

    Initiatives have power, not people. Instead of giving individuals power for functions within the business, anyone can take the initiative at any time and have full authority for their initiative. But only for their initiative – for the need or purpose they bring.

    This is a map of the energy people are bringing, not a mechanistic design for the company. This puts aside any sense of what I as the MD, or Tom as the founder, thinks should be done in the company, and just allows for the right people to do what matters, when they feel it matters, always in service of purpose.

    While standard models and org charts are designed and fixed, this map is totally dynamic and responsive. It might change on a weekly basis. Initiatives naturally come to life, and run out of energy. New ones are started all the time, to respond to situational needs.

    So, in closing, it’s important to be explicit that this is a work in progress. Every day we are noticing new things, and reshaping our understanding of how initiatives work together. But, because this is about simply uncovering and working with ‘what is’ – that’s OK. We don’t have to hold on to a fixed idea of how things should be.

That’s not to say it’s easy to let go of the mindset of design and control – although I’m one of the strongest proponents of working with emergence, especially when things get tough, I’m still pulled back towards wanting to impose a structure on the company. But if I did, that would just be me imposing my anxiety or stress onto a group of people, expecting everything to then sort itself out. And, as anyone who’s worked with people knows, they aren’t robots and there’s no point treating them that way.

    I’m really interested to hear anyone’s feedback or similar experiences – what do you think?


    Posted 29 October 2014, 3:09 pm

  2. Favicon Wired Sussex Digital Media News

    [Blog] Wired Sussex Digital Media News: New web site launch for FretHub.com

There’ve been times over the past couple of months when I was convinced that this web site had a will of its own and didn’t want to be launched, but the new FretHub.com finally went live this past weekend. The site features integration with Amazon S3 ...

    Posted 29 October 2014, 12:00 am

  3. Favicon Wired Sussex Digital Media News

    [Blog] Wired Sussex Digital Media News: Jumpstart Interactive in the running for top national marketing awards

    Jumpstart Interactive (Jumpstart) has been shortlisted for two prestigious awards in the UK-based The Drum Network Awards. Jumpstart has been named a finalist in two highly-competitive categories: ‘Social Media Campaign/Strategy of the Year’ and ‘Food ...

    Posted 29 October 2014, 12:00 am

  4. Favicon SiteVisibility

    [Blog] SiteVisibility: Producing Professional Video – Tom Hickmore – Podcast Episode #268

    In this week’s Internet Marketing Podcast Andy talks to Tom Hickmore, Creative Director of Nice Media, about how to produce professional video content. They start by discussing how to film a professional talking head video, before moving on to utilising light most effectively. Tom then talks about what you should look for when buying a camera and the effectiveness of mobile phone cameras. He finishes by mentioning some top tips for creating videos.

    Post from Apple Pie & Custard blog by SiteVisibility - An SEO Agency

    Producing Professional Video – Tom Hickmore – Podcast Episode #268


    Posted 28 October 2014, 11:00 am

  5. Favicon martyn reding - juggling with water

    [Blog] martyn reding - juggling with water: Why don't we have arbejdsglaede?



    Some Scandinavian countries have a word "arbejdsglaede" to describe happiness at work.

    It's used to describe a sense of contentment and fulfilment from your chosen vocation.

    This word doesn't exist in English. We have a million ways to describe being drunk and even more ways to describe the weather. But this state of mental wellness at work isn't common enough to warrant a distinct word.

    Posted 28 October 2014, 8:54 am

  6. Favicon Wired Sussex Digital Media News

    [Blog] Wired Sussex Digital Media News: One week left to help digital artists Blast Theory kickstart 'Karen'

Brighton-based Blast Theory have just one week left to raise the final £4,300 needed to fund the development of Karen, an artistic app mixing games, storytelling and psychological profiling in a way that has never been done before. Can you ...

    Posted 28 October 2014, 12:00 am

  7. shardcore

    [Blog] shardcore: @algobola


    disclaimer

    Ebola is a serious business, people are dying. The best way to stop the spread of a disease is to contain it at source. There are many organisations actively involved in treating people in West Africa. Do your bit – lobby your politicians and shake them out of their apathy, or make a donation – I suggest the wonderful Médecins Sans Frontières as one such organisation worthy of your support.



    exposed

    Algobola is an investigation into social contagion.

    Algobola infects Twitter, it is passed through the exchange of ‘social media fluids’ – in this case, the use of @ mentions. It’s an experiment to see how far a ‘social virus’ can travel, and whether its presence can have any effect on behaviour.

    For the purposes of this experiment, I am patient zero – infectious to anyone I mention in my twitter feed (sorry friends). Once someone is exposed, they have a 50% chance of being infected. If they become infected, they are also contagious. There is a 30% chance of survival.

    Changes in infectious status are sent directly to the affected user in the form of a modified avatar image.
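The mechanics described above can be sketched as a toy simulation. This is a hypothetical illustration, not the actual Algobola code: the 50% infection chance and 30% survival chance come from the post, but the mention rate and the assumption that an infection resolves after one step are simplifications for the sketch.

```python
import random

INFECTION_CHANCE = 0.5  # an exposed user has a 50% chance of being infected
SURVIVAL_CHANCE = 0.3   # an infected user has a 30% chance of survival

def simulate(mentions_per_step, steps, seed=None):
    """Toy model: each step, every contagious user exposes the accounts
    they @-mention, then their own infection resolves (a simplification)."""
    rng = random.Random(seed)
    contagious = {"patient_zero"}
    outcomes = {}  # user -> "survived" or "died"
    next_id = 0
    for _ in range(steps):
        newly_infected = set()
        for user in contagious:
            # each contagious user mentions some fresh accounts
            for _ in range(mentions_per_step):
                next_id += 1
                contact = "user%d" % next_id
                if rng.random() < INFECTION_CHANCE:
                    newly_infected.add(contact)
            # the infection resolves: survive or die
            outcomes[user] = "survived" if rng.random() < SURVIVAL_CHANCE else "died"
        contagious = newly_infected
    return outcomes

outcomes = simulate(mentions_per_step=3, steps=4, seed=1)
print(len(outcomes), "infections resolved")
```

With a 50% infection chance and several mentions per step, each generation tends to be larger than the last, which is the computational explosion discussed below the chart.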

[Avatar images showing SamanthaFaiers’s changing infection status]


    Here’s a chart of some test data:


[Image: chart of test data]

    The number of infected people varies over time, depending on how promiscuous people are in their social network – to some extent it also reflects the day/night posting cycles of the infected population. This test had a 50/50 survival rate. The infection I’ve just started has only a 30% survival rate, so expect more death.

Infectious processes like this suffer from a computational explosion – within a few days, millions of people are affected. (Due to Twitter’s rate limits, I can only monitor a few hundred people an hour, so the disease is going to be self-limiting.)

    This work touches on two related ideas:

    Firstly, it looks at how we respond to incurable diseases like Ebola.

In real terms, the experiment will infect a few thousand people – a drop in the ocean compared to 645,750,000 registered Twitter accounts. Indeed, this reflects the risk of contracting Ebola for those of us outside the currently infected areas. I’m sitting in Brighton; my chances of being exposed to Ebola at this point in time are effectively nil.

But our response to outbreaks like Ebola reflects who we are, as a collective humanity. It makes us question how far our empathy extends, and how we share our skills and resources in a time of crisis. The only sane response is treatment and containment at source.

    However, human nature skews us towards conflating the risk of infection with the horror of actual disease. Because the disease is gruesome, horrific and arbitrary, we have a different kind of emotional response than we have towards real, but intangible threats, like global warming.



    Secondly, it questions our apathy towards surveillance.

    Algobola works across the network. The pattern of infection reflects social behaviours – it exposes who communicates with whom. This method of infection shares similarities with modern surveillance techniques. The number of ‘hops’ between you and a ‘person of interest’ can determine whether you are subject to further investigation, and can possibly result in real limits to your freedom.

Algobola explicitly exposes these kinds of connections: it shows how one random connection in your network may result in you being marked for ‘special attention’. Within a couple of hops the virus reaches thousands of people I’ve never met – when your government is ‘analysing your metadata’, the algorithms are working very much like a virus. Viruses are amoral; algorithms are much the same.

Will the introduction of this virus have any effect on Twitter behaviour? I’m not sure. I’m taking a baseline reading of how many mentions per day each user makes before and after infection, so check back here for the results.

    Posted 27 October 2014, 5:30 pm

  8. Favicon remy sharp's b:log

    [Blog] remy sharp's b:log: Motivation

The world ain't all sunshine and rainbows. It's a very mean and nasty place and I don't care how tough you are it will beat you to your knees and keep you there permanently if you let it. You, me, or nobody is gonna hit as hard as life. But it ain't about how hard ya hit. It's about how hard you can get hit and keep moving forward. How much you can take and keep moving forward.

    Behind the screen, behind the internet, I'm generally a bit of a depressive chap. I have been for many, many years. Going back to early childhood. I've not talked about it online before, and I'm not sure how much I will in the future.

    When I realise that I'm in a slump of depression, it's like a weight on my back and around my neck. I imagine Superman with a cloak of Kryptonite.

    It's shit. It's really shit. I know how I want to feel, I want to feel happy, grateful, I want to laugh and feel loved, yet I can't get there. It's shit that I can't.

    I can see myself wanting to be alone, retreating and wanting to hide from everything.

    That's when I need motivation. This is new for me. I've found motivation to move forward. To take what my depression has to give and tell myself (out loud) over and over that I will make it out of this feeling.

    I've recently found motivation from a few very specific things I've read and heard.

    The first was the quote from the start of this post. I heard two things in this speech (from Rocky Balboa no less):

    1. How my children are the world to me, and I'm there to help them get through the world and I have to be a strong model for them.
    2. Thanks to Julie (my wife), realising that this speech applies to me and my wife. Losing our daughter to stillbirth, we managed, somehow, to survive, and to stay strong.

The second I came across after Robin Williams took his own life on 11 August 2014:

    Depression lies.

I'd never thought of it like that, but it does. I can be doing nothing, and a thought just pops into my head like: "...the reason you were hated at college was...". But if I tell myself "depression lies", I realise that thought is utter bullshit. I've no idea what motivates my brain to produce thoughts like that, but if I tell myself, out loud, "depression lies", I'm able to take a breath and brush the nastiness off.

I read about this first on Wil Wheaton's blog: depression lies and I found this post useful too.

    Finally, I watched Emma Watson's address to the UN. It fired something up inside of me. Something that I identified with and believe in. I intend to show my son and daughter the video when they're old enough to pay attention (currently 3 years and 5 months respectively, so they're a way off).

I can't quite articulate what it is in Watson's address that motivates me to move forward, but I implore you to watch the video. It's 13 minutes. Incredibly inspiring and something I think young and old alike should watch, boys in particular.


For me, I need something to reach into my slump and lend its hand to pull me up. These three things are doing that for me right now. I love my family so much, and I want them to feel loved by me.


    This post is first and foremost for me. When I feel shit again, I'll find this post again, read it, and remember that I can stand tall, and say: depression lies. Fuck you, depression.

    Reposted from The Pastry Box Project


    Posted 27 October 2014, 10:30 am

  9. Favicon Wired Sussex Digital Media News

    [Blog] Wired Sussex Digital Media News: Silicon Beach Training Launches AngularJS Training

New for late 2014, Silicon Beach Training is now providing AngularJS Training as the latest addition to our extensive web design training portfolio. In two days you will learn how to create HTML5 apps using Google’s client-side MVC framework Angular ...

    Posted 27 October 2014, 12:00 am

  10. Favicon Wired Sussex Digital Media News

    [Blog] Wired Sussex Digital Media News: Orgatec 2014 – Posture People highlights

    Recently returned from his trip to Orgatec 2014, Posture People Director Dave Blood was absolutely buzzing this morning, telling us all about the wonderful ideas he’d picked up at the leading international trade fair for Modern Office and Facility design. ...

    Posted 27 October 2014, 12:00 am

  11. Writing For SEO

    [Blog] Writing For SEO: Essential On Page SEO Factors In 20 Slides

    I’ve condensed Are You Using These Six Essential On-Page Factors For SEO? into a Haiku Deck if you prefer looking at presentations to reading copy.

    ON-PAGE SEO FACTORS – Created with Haiku Deck, presentation software that inspires

    Here’s a summary of the slides’ content:

    1. ON-PAGE SEO FACTORS
    2. WRITE SUPREME CONTENT
    3. Others are already aiming for great
    4. Content must be:
      ◦ Informed and informative
      ◦ Grammatical, with real value
      ◦ Plenty to digest
      ◦ 100% original
    5. UNIQUE TITLE TAGS
    6. Don’t have the same title tag on all of your pages
    7. Title tag must be
      ◦ Relevant to your content
      ◦ Contains a key phrase
      ◦ Not your headline
    8. USE H1 AND H2 TAGS
    9. Help your readers and search engines
    10. h1 and h2 tags
      ◦ Grab attention
      ◦ Draw readers in
      ◦ Help structure content
      ◦ Make piece easier to understand
    11. INVOLVING DESCRIPTION TAGS
    12. Unique on each page
    13. Description tags
      ◦ Are not a ranking factor
      ◦ Can show in search results
      ◦ Encourage clicks
      ◦ Increase traffic
    14. HELPFUL ALT TEXT
    15. Make the invisible visible
    16. Why use alt tags?
      ◦ Google cannot see images
      ◦ Add text Google can read
      ◦ Images found in searches
      ◦ Use key phrases to help SEO
    17. USE SCHEMA.ORG
    18. Like meta tags 2.0. But more powerful
    19. schema.org
      ◦ Present information in a standard way
      ◦ Addresses, business descriptions, products
      ◦ Local SEO
      ◦ E-commerce sites
    20. CONTACT DAVID ROSAM
      DAVID@WRITINGFORSEO.ORG | +44 (0)1273 906607

    Please let me know what you think.

    Have you read these?

    Posted 24 October 2014, 11:12 am

  12. Favicon SiteVisibility

    [Blog] SiteVisibility: Panda and Penguin Updates – Gerry White – Internet Marketing Podcast #267.5

    In this week’s Internet Marketing Podcast Andy talks to Gerry White, Technical SEO Director at SiteVisibility about Google’s recent Panda and Penguin updates. Gerry goes through the differences between Panda and Penguin, and the purpose of each update. He then goes through the various tools that can be used to identify preventable issues with a site which Google may end up penalising. Finally he gives some advice on ensuring the content and meta data of a site is up to scratch and goes over Google’s other update, Hummingbird.

    Post from Apple Pie & Custard blog by SiteVisibility - An SEO Agency

    Panda and Penguin Updates – Gerry White – Internet Marketing Podcast #267.5


    Posted 24 October 2014, 11:00 am

  13. Favicon Wired Sussex Digital Media News

    [Blog] Wired Sussex Digital Media News: Salesforce launch Analytics Platform

    Salesforce 'Wave' is the latest piece of technology from Salesforce, but what does it do and who is it for? Come to the Brighton Salesforce Usergroup meetup on Thu 13th Nov to find out! Francis Pindar is coming hot from Dreamforce and will fill us ...

    Posted 24 October 2014, 1:00 am

  14. Favicon Paul Silver's blog

    [Blog] Paul Silver's blog: Setting up ColdFusion 11 and SQL Server Express 2014 on Windows 8

    Recently I installed Windows 8.1 in a virtual machine so I could set up IIS, ColdFusion (Developer version) and SQL Server (Express), which would match some of my client’s hosting well enough to use as a test environment.

    SQL Server Express and ColdFusion developer edition can be used for free by developers, which makes this a nice, low cost development environment.

I hit big problems trying to get ColdFusion to talk to SQL Server Express, so I thought I ought to document the setup process for the next time I hit these problems. Sorry if you’re reading this and some of the notes are not detailed enough – I’ve set up ColdFusion and SQL Server enough times that the basics have stuck. If you need more help, you might find it useful to search YouTube for help videos.

    Setting up SQL Server Express 2014

Download SQL Server Express 2014 and run the installer. This all worked fine, so just Google for wherever Microsoft are putting the installers now (which is a different place whenever I look, which is several years apart). Find out whether you’ve got a 32-bit or 64-bit version of Windows first, as you need to download the version which matches.

    Setting up IIS

    Go in to Windows settings > Control Panel > Programs > Turn Windows features on and off

    I’m not sure I needed all of these, but I ended up turning them on while trying to solve problems:

Tick all of these (where items are nested, tick the specific items inside the nest rather than just the parent to install everything):

    .Net framework 3.5
    .Net framework 4
    Within Internet Information Services:
    – Web Management Tools:
    – – IIS 6 Management Compatibility
    – – – IIS Metabase and IIS 6 configuration compatibility
    – – IIS Management Console
    – – IIS Management Service
    – World Wide Web Services:
    – – Application Development Features:
    – – – .Net Extensibility 3.5
    – – – .Net Extensibility 4.5
    – – – ASP.NET 3.5
    – – – ASP.NET 4
    – – – CGI
    – – – ISAPI Extensions
    – – – ISAPI Filters
    – – Common HTTP Features:
    – – – Default Document
    – – – Directory Browsing
    – – – HTTP Errors
    – – – HTTP Redirection
    – – – Static Content
    – – Health and Diagnostics:
    – – – HTTP Logging
    – – Performance Features:
    – – – Dynamic Content Compression
    – – – Static Content Compression
    – – Security:
    – – – Request Filtering

    Setting up ColdFusion 11

    Download from http://coldfusion.adobe.com

    Run the installer

    Choose the option to install a standalone web server, then, later in the install options you can choose to connect it up to IIS.

    Setting up a database user in SQL Server Express 2014

    In SQL Server Management Studio

    Create a database:

    Right click on Databases in the left column ‘Object Explorer’ > ‘New Database…’ and run through the short form

    Create a user:

    In left column ‘Object Explorer’, click on Security, right click on ‘Logins’ > ‘New Login…’

    Add a new user, e.g. ‘CFUser’

    Choose SQL Server authentication, give it a password.

    Uncheck ‘Enforce password policy’

    In the ‘Default Database’ drop down, change it to your new database

    On the left hand ‘Select a page’ click on ‘User Mapping’

Tick your new database, then further down add the user to it with the roles ‘db_datareader’ & ‘db_datawriter’

    Configuring Windows Firewall to allow access to SQL Server

    As per these instructions from Microsoft I ran WF.msc then set up an Inbound Rule to allow TCP on port 1433 for local use.

    Configuring security to allow ColdFusion to get data from SQL Server Express 2014

    Apparently by default, SQL Server Express doesn’t allow remote connections, but configuring it to allow a remote connection so ColdFusion could get data from it was very hard, as the 2014 version of SQL Server Express is more locked down than previous versions. I wouldn’t have got it working without this Stackoverflow question about SQL Server Express 2012.

    Open ‘SQL Server Configuration Manager’ (by searching for ‘SQL Server configuration’ on the Start screen.)

Under ‘SQL Server Network Configuration’ > ‘Protocols for SQLEXPRESS’:

    Change ‘Named Pipes’ to ‘Enabled’ (by right clicking) (I’m not sure this step is necessary, as I found it in a bit of advice while I was still trying to get everything working.)

    Change ‘TCP/IP’ to ‘Enabled’, then right click again and choose ‘Properties’

Under ‘IP2’ set the IP address to the computer’s IP address on the local subnet. (I found this out by running ‘netstat -a’ on the command line and looking for port 1433 while I was trying something else; I’m sure there’s an easier way of finding it.)

Scroll down to the settings for IPAll.

    Make sure ‘TCP Dynamic Ports’ is blank (not the 5 digit number that mine had in there by default.)

    Make sure the ‘TCP Port’ is set to ‘1433’ (mine was blank by default.)

You may also need to go to ‘Services’ (by searching for it in Windows) and turn on the SQL Server Browser service (and set it to run automatically) – I already had mine turned on during other debugging, and I’ve read different advice on whether it should be on or off.

    Some of the settings for SQL Server don’t take until you’ve re-started the SQL Server service. I think in the end I restarted Windows to be sure things were going to take long-term.

    After all of this, I was able to go in to ColdFusion administrator and successfully set up a datasource using the database user I’d set up. Just getting SQL Server and ColdFusion to talk to each other was 3-4 hours of messing about with my settings, hence writing up these notes to make it easier next time.
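One extra sanity check that can save some of that messing about: before blaming ColdFusion, confirm that something is actually listening on port 1433. This is just a generic TCP connectivity probe (a sketch, not specific to SQL Server):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. SQL Server's default port on the local machine
print(port_open("127.0.0.1", 1433))
```

If this prints False after the restarts, the problem is still in the SQL Server network configuration or the firewall rule, not in the ColdFusion datasource settings.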

    Posted 23 October 2014, 3:51 pm

  15. Favicon Adactio: Journal

    [Blog] Adactio: Journal: Be progressive

Aaron wrote a great post a little while back called A Fundamental Disconnect. In it, he points to a worldview amongst many modern web developers, who see JavaScript as a universally-available technology in web browsers. They are, in effect, viewing a browser’s JavaScript engine as a runtime environment, and treating web development as no different from any other kind of software development.

    The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.

    Treating JavaScript support in “the browser” as a known quantity is as much of a consensual hallucination as deciding that all viewports are 960 pixels wide. Even that phrasing—“the browser”—shows a framing that’s at odds with the reality of developing for the web; we don’t have to think about “the browser”, we have to think about browsers:

    Lakoffian self-correction: if I’m about to talk about doing something “in the browser”, I try to catch myself and say “in browsers” instead.

While we might like to think that browsers have all reached a certain level of equilibrium, as Aaron puts it, “the Web is messy”:

    And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.

    Please don’t think that either Aaron or I are saying that you shouldn’t use JavaScript. Far from it! It’s simply a matter of how you wield the power of JavaScript. If you make your core tasks dependent on JavaScript, some of your potential users will inevitably be left out in the cold. But if you start by building on a classic server/client model, and then enhance with JavaScript, you can have your cake and eat it too. Modern browsers get a smooth, rich experience. Older browsers get a clunky experience with full page refreshes, but that’s still much, much better than giving them nothing at all.

    Aaron makes the case that, while we cannot control which browsers people will use, we can control the server environment.

    Stuart takes issue with that assertion in a post called Fundamentally Disconnected. In it, he points out that the server isn’t quite the controlled environment that Aaron claims:

    Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue.

It’s true enough that the server isn’t some rock-solid never-changing environment. Anyone who’s ever had to install patches or update programming languages knows this. But at least it’s one single environment …whereas the web has an overwhelming multitude of environments; one for every browser/OS/device combination.

    Stuart finishes on a stirring note:

    The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed.

    However he wraps up by saying that…

    …the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

    In a post called Missed Connections, Aaron pushes back against that last point:

    The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.

    While JavaScript may technically be available and consistently-implemented across most devices used to access our sites nowadays, we do not control how, when, or even if that JavaScript is ultimately executed.

    Stuart responds in a post called Reconnecting (and, by the way, how great is it to see this kind of thoughtful blog-to-blog discussion going on?).

    I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work.

    But here’s the problem with progressively enhancing from server functionality to a rich client:

    A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment.

    Good point.

    Now, at this juncture, I could point out that—by using progressive enhancement—you can still have the best of both worlds. Stuart has anticipated that:

    It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps.

    Ah, there’s the rub!

    When I’ve extolled the virtues of progressive enhancement in the past, the pushback I most often receive is on this point. Surely it’s wasteful to build something that works on the server and then reimplement much of it on the client?

    Personally, I try not to completely reinvent all the business logic that I’ve already figured out on the server, and then rewrite it all in JavaScript. I prefer to use JavaScript—and specifically Ajax—as a dumb waiter, shuffling data back and forth between the client and server, where the real complexity lies.
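That “dumb waiter” pattern can be sketched roughly like this (the form selector, endpoint, and payload helper are all hypothetical — the point is that the script only shuffles data, while the server keeps the business logic, and the form still works as a plain POST if the script never runs):

```javascript
// A pure helper: shuffle the form's name/value pairs into a plain
// object. No business logic lives here — the server does the real work.
function buildPayload(fields) {
  return Object.fromEntries(fields.map(([name, value]) => [name, value]));
}

// Enhance only when the browser can handle it; otherwise the form
// falls back to its normal POST-and-refresh behaviour.
if (typeof document !== 'undefined' && 'fetch' in window) {
  document.querySelector('form.comments').addEventListener('submit', (event) => {
    event.preventDefault();
    const form = event.target;
    const fields = [...new FormData(form).entries()];
    fetch(form.action, {
      method: form.method,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(buildPayload(fields)),
    })
      .then((response) => response.json())
      .then((data) => {
        // Update just the fragment of the page that changed.
      });
  });
}
```

Because all the hard thinking stays on the server, the client-side code stays small and boring — which is exactly what you want from a dumb waiter.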

    I also think that building in this way will take longer …at first. But then on the next project, it takes less time. And on the project after that, it takes less time again. From that perspective, it’s similar to switching from tables for layout to using CSS, or switching from building fixed-width sites to responsive design: the initial learning curve is steep, but then it gets easier over time, until it simply becomes normal.

    But fundamentally, Stuart is right. Developers don’t like to violate the DRY principle: Don’t Repeat Yourself. Writing code for the server environment, and then writing very similar code for the browser—I mean browsers—is a bad code smell.

    Here’s the harsh truth: building websites with progressive enhancement is not convenient.

    Building a client-side web thang that requires JavaScript to work is convenient, especially if you’re using a framework like Angular or Ember. In fact, that’s the main selling point of those frameworks: developer convenience.

    The trade-off is that to get that level of developer convenience, you have to sacrifice the universal reach that the web provides, and limit your audience to the browsers that can run a pre-determined level of JavaScript. Many developers are quite willing to make that trade-off.

    Developer convenience is a very powerful and important force. I wish that progressive enhancement could provide the same level of developer convenience offered by Angular and Ember, but right now, it doesn’t. Instead, its benefits are focused on the end user, often at the expense of the developer.

    Personally, I’m willing to take that hit. I’ve always maintained that, given the choice between making something my problem, and making something the user’s problem, I’ll choose to make it my problem every time. But I absolutely understand the mindset of developers who choose otherwise.

    But perhaps there’s a way to cut this Gordian knot. What if you didn’t need to write your code twice? What if you could write code for the server and then run the very same code on the client?

    This is the promise of isomorphic JavaScript. It’s a terrible name for a great idea.

    For me, this is the most exciting aspect of Node.js:

    With Node.js, a fast, stable server-side JavaScript runtime, we can now make this dream a reality. By creating the appropriate abstractions, we can write our application logic such that it runs on both the server and the client — the definition of isomorphic JavaScript.
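The shared-abstraction idea can be sketched in miniature: write a piece of application logic once, with no environment dependencies, and expose it to whichever runtime picks it up (the function and its validation rule here are entirely hypothetical):

```javascript
// One validation rule, written once, enforced on both sides of the wire:
// the Node.js server uses it before saving, the browser uses it before
// submitting. Neither copy can drift out of sync — there is only one copy.
function isValidUsername(name) {
  return typeof name === 'string' && /^[a-z0-9_-]{3,16}$/i.test(name);
}

// Export for Node.js; attach to the global object in the browser.
if (typeof module !== 'undefined' && module.exports) {
  module.exports = { isValidUsername };
} else if (typeof window !== 'undefined') {
  window.isValidUsername = isValidUsername;
}
```

The export boilerplate is the least interesting part — bundlers and ES modules can hide it — but the principle is the whole pitch of isomorphic JavaScript: the logic itself never needs to know where it’s running.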

    Some big players are looking into this idea. It’s the thinking behind AirBnB’s Rendr.

    Interestingly, the reason why many large sites are investigating this approach isn’t about universal access—quite often they have separate siloed sites for different device classes. Instead it’s about performance. The problem with having all of your functionality wrapped up in JavaScript on the client is that, until all of that JavaScript has loaded, the user gets absolutely nothing. Compare that to rendering an HTML document sent from the server, and the perceived performance difference is very noticeable.

    “We don’t have any non-JavaScript users” No, all your users are non-JS while they’re downloading your JS

    — Jake Archibald (@jaffathecake) May 28, 2012

    Here’s the ideal situation:

    1. A browser requests a URL.
    2. The server sends HTML, which renders quickly, along with some mustard-cutting JavaScript.
    3. If the browser doesn’t cut the mustard, or JavaScript fails, fall back to full page refreshes.
    4. If the browser does cut the mustard, keep all the interaction in the client, just like a single page app.

    With Node.js on the server, and JavaScript in the client, steps 3 and 4 could theoretically use the same code.
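Step 2’s mustard-cutting test is just a feature check before enhancing — which features to test for is an assumption that depends on what your enhancements actually rely on. A minimal sketch, written as a pure function so the browser globals are passed in rather than assumed:

```javascript
// Decide whether this browser gets the single-page-app treatment.
// The feature list is illustrative — test for whatever APIs your
// client-side code genuinely depends on.
function cutsTheMustard(env) {
  return Boolean(env.querySelector && env.addEventListener && env.fetch);
}

// In a real page, something like:
//   if (cutsTheMustard(window.document ? window : {})) {
//     loadEnhancements(); // keep interaction in the client (step 4)
//   }
//   // otherwise do nothing: full page refreshes still work (step 3)
```

The crucial property is what happens when the test fails: nothing. The server-rendered pages carry on working, so an old browser gets a clunky-but-complete experience rather than a blank page.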

    So why aren’t we seeing more of these holy-grail apps that achieve progressive enhancement without code duplication?

    Well, partly it’s back to that question of controlling the server environment.

    @sil @adactio @dracos That architecture is a hard choice to make because it ties you to a small set of runtimes on the server.

    — Dan Webb (@danwrong) September 22, 2014

    @sil @adactio @dracos plus, I think there’s something positive about hard separation of client and server code. Gets you thinking right.

    — Dan Webb (@danwrong) September 22, 2014

    This is something that Nicholas Zakas tackled a year ago when he wrote about Node.js and the new web front-end. He proposes a third layer that sits between the business logic and the rendered output. By applying the idea of isomorphic JavaScript, this interface layer could be run on the server (as Node.js) or on the client (as JavaScript), while still allowing you to have the rest of your server environment running whatever programming language works for you.
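Zakas’s interface layer amounts to keeping the rendering code free of environment dependencies, so the same function can produce the initial HTML on the server and update the page in the browser. A hypothetical sketch (the template, data shape, and escaping strategy are all assumptions — real code would escape its inputs):

```javascript
// A render function with no DOM and no Node.js APIs: just data in,
// markup out. Node.js calls it to build the first response; the
// browser calls it again when new data arrives over Ajax.
// (HTML escaping is omitted for brevity — never do that in production.)
function renderComment(comment) {
  return '<li class="comment">' +
         '<strong>' + comment.author + '</strong>: ' + comment.text +
         '</li>';
}
```

Everything below this layer — databases, queues, whatever language you like — stays server-only, which is what makes the idea compatible with an existing back end.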

    It’s still early days for this kind of thinking, and there are lots of stumbling blocks—trying to write JavaScript that can be executed on both the server and the client isn’t so easy. But I’m pretty excited about where this could lead. I love the idea of building in a way that provides the performance and universal access of progressive enhancement, while also providing the developer convenience of JavaScript frameworks.

    In the meantime, building with progressive enhancement may have to involve a certain level of inconvenience and duplication of effort. It’s a price I’m willing to pay, but I wish I didn’t have to. And I totally understand that others aren’t willing to pay that price.

    But while the mood might currently seem to be in favour of using monolithic JavaScript frameworks to build client-side apps that rely on JavaScript in browsers, I think that the tide might change if we started to see poster children for progressive enhancement.

    Three years ago, when I was trying to convince clients and fellow developers that responsive design was the way to go, it was a hard sell. It reminded me of trying to sell the benefits of using web standards instead of using tables for layout. Then, just as Doug’s redesign of Wired and Mike’s redesign of ESPN helped sell the idea of CSS for layout, the Filament Group’s work on the Boston Globe made it a lot easier to sell the idea of responsive design. Then Paravel designed a responsive Microsoft homepage and the floodgates opened.

    Now …who wants to do the same thing for progressive enhancement?

    Posted 23 October 2014, 2:54 pm

Flickr

These photos are the most recent added to the BNM Flickr Photo pool.


Recent Threads

This list of subject headings is generated from the last 50 posts made to the BNM mailing list which also had a response.

  1. Urgent Need with BMS/NJ 3 posts.
  2. HOT LIST 3 posts.
  3. Urgent Recuritment for... 2 posts.
  4. R2R (Record to Report)... 2 posts.
  5. LOAD RUNNER TESTING URGENT 2 posts.
  6. business analyst EXP;10... 2 posts.

Last.fm artist chart

This is a chart of the most listened to artists in the BNM last.fm group. Chart for the week ending Sun, 26 Oct 2014.

  1. Royal Blood
  2. Caribou
  3. The Black Keys
  4. Daft Punk
  5. Flying Lotus
  6. St. Vincent
  7. Bob Dylan
  8. The Kinks
  9. Johnny Cash
  10. Weezer

Chart updated every Sunday.

del.icio.us

These are links tagged by members of the BNM mailing list with the tag ‘bnm’. If you find something you think other readers may find useful, why not do the same?

Events

Events are taken from the BNM Upcoming Group. There are currently no events to display.

You can download, or subscribe to this schedule.